From 7f3bd98cc2650a2afbada9e6f1156fc858fe2956 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Fri, 2 Aug 2024 22:38:03 +0000 Subject: [PATCH] Deployed 6a9898f to selection-refactor-metcouncil-test with MkDocs 1.5.3 and mike 1.1.2 --- .../data_models/index.html | 4 ++-- .../search/search_index.json | 2 +- selection-refactor-metcouncil-test/sitemap.xml | 10 +++++----- .../sitemap.xml.gz | Bin 275 -> 275 bytes 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/selection-refactor-metcouncil-test/data_models/index.html b/selection-refactor-metcouncil-test/data_models/index.html index afd66358..3b42aa04 100644 --- a/selection-refactor-metcouncil-test/data_models/index.html +++ b/selection-refactor-metcouncil-test/data_models/index.html @@ -3045,13 +3045,13 @@

A: Series[int] = pa.Field(nullable=False, coerce=True) B: Series[int] = pa.Field(nullable=False, coerce=True) geometry: GeoSeries = pa.Field(nullable=False) - name: Series[str] = pa.Field(nullable=True) + name: Series[str] = pa.Field(nullable=False) rail_only: Series[bool] = pa.Field(coerce=True, nullable=False, default=False) bus_only: Series[bool] = pa.Field(coerce=True, nullable=False, default=False) drive_access: Series[bool] = pa.Field(coerce=True, nullable=False, default=True) bike_access: Series[bool] = pa.Field(coerce=True, nullable=False, default=True) walk_access: Series[bool] = pa.Field(coerce=True, nullable=False, default=True) - distance: Series[float] = pa.Field(coerce=True, nullable=True) + distance: Series[float] = pa.Field(coerce=True, nullable=False) roadway: Series[str] = pa.Field(nullable=False, default="road") managed: Series[int] = pa.Field(coerce=True, nullable=False, default=0) diff --git a/selection-refactor-metcouncil-test/search/search_index.json b/selection-refactor-metcouncil-test/search/search_index.json index 863bc9f1..e50f0fa4 100644 --- a/selection-refactor-metcouncil-test/search/search_index.json +++ b/selection-refactor-metcouncil-test/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"

Network Wrangler is a Python library for managing travel model network scenarios.

"},{"location":"#system-requirements","title":"System Requirements","text":"

Network Wrangler should be operating-system agnostic and has been tested on Ubuntu and macOS.

Network Wrangler requires Python 3.7+. If you have a different version of Python installed (e.g. from ArcGIS), conda or a similar virtual environment manager can take care of installing it for you, as described in the installation instructions below.

installing conda

In order to assist in installation, it\u2019s helpful to have Miniconda or another virtual environment manager installed to make sure the Network Wrangler dependencies don\u2019t interfere with any other package requirements you have. If you don\u2019t have one already, we recommend starting with Miniconda as it has the smallest footprint. conda is the environment manager contained within both the Anaconda and Miniconda applications.

"},{"location":"#installation","title":"Installation","text":"

Requirements for basic network_wrangler functionality, as well as for enhanced development/testing, visualization, and documentation functionality, are stored in requirements*.txt and pyproject.toml and are installed automatically when using pip.

create a new conda environment for wrangler

conda config --add channels conda-forge\nconda create python=3.7 -n wrangler\nconda activate wrangler\n

tricky dependencies

rtree, geopandas and osmnx can have some tricky co-dependencies. If you don\u2019t already have up-to-date installations of them, we\u2019ve had the best success installing them using conda (as opposed to pip).

conda install rtree geopandas osmnx\n

Ready to install network wrangler?

Latest Official Version | From GitHub | From Clone
pip install network-wrangler\n
pip install git+https://github.com/wsp-sag/network_wrangler.git@master#egg=network_wrangler\n

Note

If you want to install from a specific tag/version number or branch, replace @master with @<branchname> or @<tag>.

If you are going to be working on Network Wrangler locally, you might want to clone it to your local machine and install it from the clone. The -e flag installs it in editable mode.

If you have GitHub Desktop installed, you can either do this through the GitHub user interface by clicking the green \u201cClone or download\u201d button on the main Network Wrangler repository page.

Otherwise, you can use the command prompt to navigate to the directory where you would like to store your Network Wrangler clone and then use a git command to clone it.

cd <path-to-where-you-want-wrangler>\ngit clone https://github.com/wsp-sag/network_wrangler\n

Expected output:

cloning into network_wrangler...\nremote: Enumerating objects: 53, done.\nremote: Counting objects: 100% (53/53), done.\nremote: Compressing objects: 100% (34/34), done.\nremote: Total 307 (delta 28), reused 29 (delta 19), pack-reused 254\nReceiving objects: 100% (307/307), 15.94 MiB | 10.49 MiB/s, done.\nResolving deltas: 100% (140/140), done.\n

Then you should be able to install Network Wrangler in editable (\u201cdevelop\u201d) mode.

Navigate your command prompt into the network wrangler folder and then install network wrangler in editable mode. This will take a few minutes because it is also installing all the prerequisites.

cd network_wrangler\npip install -e .\n
"},{"location":"#common-installation-issues","title":"Common Installation Issues","text":"

Issue: clang: warning: libstdc++ is deprecated; move to libc++ with a minimum deployment target of OS X 10.9 [-Wdeprecated]. If you are using macOS, you might need to update your Xcode command line tools and headers.

Issue: OSError: Could not find libspatialindex_c library file. Try installing rtree on its own from the Anaconda cloud:

conda install rtree\n

Issue: Shapely, a prerequisite, doesn\u2019t install properly because it is missing the GEOS module. Try installing Shapely on its own from the Anaconda cloud:

conda install shapely\n

Issue: conda is unable to install a library or to update to a specific library version. Try installing the library from conda-forge:

conda install -c conda-forge *library*\n

Issue: User does not have permission to install in the installation directories. Try running Anaconda Prompt as an administrator.

"},{"location":"#quickstart","title":"Quickstart","text":"

To get a feel for the API and using project cards, please refer to the \u201cWrangler Quickstart\u201d jupyter notebook.

To start the notebook, open a command line in the network_wrangler top-level directory and type:

jupyter notebook

"},{"location":"#documentation","title":"Documentation","text":"

Documentation can be built from the /docs folder using the command: make html

"},{"location":"#usage","title":"Usage","text":"
import network_wrangler\n\n##todo this is just an example for now\n\nnetwork_wrangler.setup_logging()\n\n## Network Manipulation\nmy_network = network_wrangler.read_roadway_network(...) # returns\nmy_network.apply_project_card(...) # returns\nmy_network.write_roadway_network(...) # returns\n\n## Scenario Building\nmy_scenario = network_wrangler.create_scenario(\n        base_scenario=my_base_scenario,\n        card_search_dir=project_card_directory,\n        tags = [\"baseline-2050\"]\n        )\nmy_scenario.apply_all_projects()\nmy_scenario.write(\"my_project/baseline\", \"baseline-2050\")\nmy_scenario.summarize(outfile=\"scenario_summary_baseline.txt\")\n\nmy_scenario.add_projects_from_files(list_of_build_project_card_files)\nmy_scenario.queued_projects\nmy_scenario.apply_all_projects()\nmy_scenario.write(\"my_project/build\", \"baseline\")\n
"},{"location":"#attribution","title":"Attribution","text":"

This project is built upon the ideas and concepts implemented in the network wrangler project by the San Francisco County Transportation Authority and expanded upon by the Metropolitan Transportation Commission.

While Network Wrangler as written here is based on these concepts, the code is distinct and builds upon other packages, such as geopandas and pydantic, which were not yet available when Network Wrangler 1.0 was developed.

"},{"location":"#contributing","title":"Contributing","text":"

Pull requests are welcome. Please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

"},{"location":"#license","title":"License","text":"

Apache-2.0

"},{"location":"api/","title":"API Documentation","text":""},{"location":"api/#common-usage","title":"Common Usage","text":""},{"location":"api/#base-objects","title":"Base Objects","text":"

Scenario class and related functions for managing a scenario.

Usage:

my_base_year_scenario = {\n    \"road_net\": load_roadway(\n        links_file=STPAUL_LINK_FILE,\n        nodes_file=STPAUL_NODE_FILE,\n        shapes_file=STPAUL_SHAPE_FILE,\n    ),\n    \"transit_net\": load_transit(STPAUL_DIR),\n}\n\n# create a future baseline scenario from base by searching for all cards in dir w/ baseline tag\nproject_card_directory = os.path.join(STPAUL_DIR, \"project_cards\")\nmy_scenario = create_scenario(\n    base_scenario=my_base_year_scenario,\n    card_search_dir=project_card_directory,\n    filter_tags = [ \"baseline2050\" ]\n)\n\n# check project card queue and then apply the projects\nmy_scenario.queued_projects\nmy_scenario.apply_all_projects()\n\n# check applied projects, write it out, and create a summary report.\nmy_scenario.applied_projects\nmy_scenario.write(\"baseline\")\nmy_scenario.summarize(outfile = \"baseline2050summary.txt\")\n\n# Add some projects to create a build scenario based on a list of files.\nbuild_card_filenames = [\n    \"3_multiple_roadway_attribute_change.yml\",\n    \"road.prop_changes.segment.yml\",\n    \"4_simple_managed_lane.yml\",\n]\nmy_scenario.add_projects_from_files(build_card_filenames)\nmy_scenario.write(\"build2050\")\nmy_scenario.summarize(outfile = \"build2050summary.txt\")\n

Roadway Network class and functions for Network Wrangler.

Used to represent a roadway network and perform operations on it.

Usage:

from network_wrangler import load_roadway_from_dir, write_roadway\n\nnet = load_roadway_from_dir(\"my_dir\")\nnet.get_selection({\"links\": [{\"name\": [\"I 35E\"]}]})\nnet.apply(\"my_project_card.yml\")\n\nwrite_roadway(net, \"my_out_prefix\", \"my_dir\", file_format = \"parquet\")\n

TransitNetwork class for representing a transit network.

Transit Networks are represented as a Wrangler-flavored GTFS Feed and optionally mapped to a RoadwayNetwork object. The TransitNetwork object is the primary object for managing transit networks in Wrangler.

Usage:

```python\nimport network_wrangler as wr\nt = wr.load_transit(stpaul_gtfs)\nt.road_net = wr.load_roadway(stpaul_roadway)\nt = t.apply(project_card)\nwrite_transit(t, \"output_dir\")\n```\n
"},{"location":"api/#network_wrangler.scenario.ProjectCardError","title":"ProjectCardError","text":"

Bases: Exception

Raised when a project card is not valid.

Source code in network_wrangler/scenario.py
class ProjectCardError(Exception):\n    \"\"\"Raised when a project card is not valid.\"\"\"\n\n    pass\n
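To show where this exception surfaces in calling code, here is a minimal sketch; the class is re-declared locally as a stand-in, and validate_card is a hypothetical helper, not part of network_wrangler's API.

```python
# Hypothetical sketch of handling ProjectCardError. The local class and the
# validate_card helper are illustrative stand-ins, not network_wrangler's API.
class ProjectCardError(Exception):
    """Raised when a project card is not valid."""

def validate_card(card: dict) -> str:
    """Return the card's project name, raising ProjectCardError if it is missing."""
    if not card.get("project"):
        raise ProjectCardError("Project card is missing a 'project' name.")
    return card["project"]

try:
    validate_card({"tags": ["baseline2050"]})  # invalid: no 'project' key
except ProjectCardError as err:
    print(f"Skipping invalid card: {err}")
```

In practice Scenario raises this during _add_project when a card fails validation or duplicates an existing project name.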
"},{"location":"api/#network_wrangler.scenario.Scenario","title":"Scenario","text":"

Bases: object

Holds information about a scenario.

Typical usage example:

my_base_year_scenario = {\n    \"road_net\": load_roadway(\n        links_file=STPAUL_LINK_FILE,\n        nodes_file=STPAUL_NODE_FILE,\n        shapes_file=STPAUL_SHAPE_FILE,\n    ),\n    \"transit_net\": load_transit(STPAUL_DIR),\n}\n\n# create a future baseline scenario from base by searching for all cards in dir w/ baseline tag\nproject_card_directory = os.path.join(STPAUL_DIR, \"project_cards\")\nmy_scenario = create_scenario(\n    base_scenario=my_base_year_scenario,\n    card_search_dir=project_card_directory,\n    filter_tags = [ \"baseline2050\" ]\n)\n\n# check project card queue and then apply the projects\nmy_scenario.queued_projects\nmy_scenario.apply_all_projects()\n\n# check applied projects, write it out, and create a summary report.\nmy_scenario.applied_projects\nmy_scenario.write(\"baseline\")\nmy_scenario.summarize(outfile = \"baseline2050summary.txt\")\n\n# Add some projects to create a build scenario based on a list of files.\nbuild_card_filenames = [\n    \"3_multiple_roadway_attribute_change.yml\",\n    \"road.prop_changes.segment.yml\",\n    \"4_simple_managed_lane.yml\",\n]\nmy_scenario.add_projects_from_files(build_card_filenames)\nmy_scenario.write(\"build2050\")\nmy_scenario.summarize(outfile = \"build2050summary.txt\")\n

Attributes:

base_scenario: dictionary representation of a scenario

road_net (Optional[RoadwayNetwork]): instance of RoadwayNetwork for the scenario

transit_net (Optional[TransitNetwork]): instance of TransitNetwork for the scenario

project_cards (dict[str, ProjectCard]): storage of all project cards, keyed by project name

queued_projects: projects which are \u201cshovel ready\u201d: they have had their prerequisites checked and any required re-ordering done. Similar to a git staging area, project cards are no longer recognized in this collection once they are moved to applied.

applied_projects: list of project names that have been applied

projects: list of all projects, whether planned, queued, or applied

prerequisites: dictionary storing prerequisite information

corequisites: dictionary storing corequisite information

conflicts: dictionary storing conflict information
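The prerequisites mapping is what drives project ordering: queued projects are topologically sorted so every prerequisite is applied before its dependents. A minimal, self-contained sketch of that idea (Kahn's algorithm over an assumed prerequisites dict; illustrative only, not the library's implementation):

```python
# Sketch (assumed data shapes, not network_wrangler's API): order projects so
# that every prerequisite precedes its dependents, as Scenario.order_projects does.
from collections import defaultdict, deque

def order_by_prereqs(projects: list[str], prerequisites: dict[str, list[str]]) -> list[str]:
    """Return projects ordered so each listed prerequisite comes before its dependent."""
    indegree = {p: 0 for p in projects}
    dependents = defaultdict(list)
    for project, prereqs in prerequisites.items():
        for prereq in prereqs:
            # Only consider edges between projects actually in this batch.
            if prereq in indegree and project in indegree:
                dependents[prereq].append(project)
                indegree[project] += 1
    ready = deque(p for p in projects if indegree[p] == 0)
    ordered = []
    while ready:
        p = ready.popleft()
        ordered.append(p)
        for d in dependents[p]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    return ordered
```

Here, order_by_prereqs and its input shapes are illustrative; Scenario.order_projects performs the equivalent sort with logging and missing-project validation.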

Source code in network_wrangler/scenario.py
class Scenario(object):\n    \"\"\"Holds information about a scenario.\n\n    Typical usage example:\n\n    ```python\n    my_base_year_scenario = {\n        \"road_net\": load_roadway(\n            links_file=STPAUL_LINK_FILE,\n            nodes_file=STPAUL_NODE_FILE,\n            shapes_file=STPAUL_SHAPE_FILE,\n        ),\n        \"transit_net\": load_transit(STPAUL_DIR),\n    }\n\n    # create a future baseline scenario from base by searching for all cards in dir w/ baseline tag\n    project_card_directory = os.path.join(STPAUL_DIR, \"project_cards\")\n    my_scenario = create_scenario(\n        base_scenario=my_base_year_scenario,\n        card_search_dir=project_card_directory,\n        filter_tags = [ \"baseline2050\" ]\n    )\n\n    # check project card queue and then apply the projects\n    my_scenario.queued_projects\n    my_scenario.apply_all_projects()\n\n    # check applied projects, write it out, and create a summary report.\n    my_scenario.applied_projects\n    my_scenario.write(\"baseline\")\n    my_scenario.summarize(outfile = \"baseline2050summary.txt\")\n\n    # Add some projects to create a build scenario based on a list of files.\n    build_card_filenames = [\n        \"3_multiple_roadway_attribute_change.yml\",\n        \"road.prop_changes.segment.yml\",\n        \"4_simple_managed_lane.yml\",\n    ]\n    my_scenario.add_projects_from_files(build_card_filenames)\n    my_scenario.write(\"build2050\")\n    my_scenario.summarize(outfile = \"build2050summary.txt\")\n    ```\n\n    Attributes:\n        base_scenario: dictionary representation of a scenario\n        road_net: instance of RoadwayNetwork for the scenario\n        transit_net: instance of TransitNetwork for the scenario\n        project_cards: Mapping[ProjectCard.name,ProjectCard] Storage of all project cards by name.\n        queued_projects: Projects which are \"shovel ready\" - have had pre-requisits checked and\n            done any required re-ordering. 
Similar to a git staging, project cards aren't\n            recognized in this collection once they are moved to applied.\n        applied_projects: list of project names that have been applied\n        projects: list of all projects either planned, queued, or applied\n        prerequisites:  dictionary storing prerequisite information\n        corequisites:  dictionary storing corequisite information\n        conflicts: dictionary storing conflict information\n    \"\"\"\n\n    def __init__(\n        self,\n        base_scenario: Union[Scenario, dict],\n        project_card_list: list[ProjectCard] = [],\n        name=\"\",\n    ):\n        \"\"\"Constructor.\n\n        Args:\n        base_scenario: A base scenario object to base this instance off of, or a dict which\n            describes the scenario attributes including applied projects and respective conflicts.\n            `{\"applied_projects\": [],\"conflicts\":{...}}`\n        project_card_list: Optional list of ProjectCard instances to add to planned projects.\n        name: Optional name for the scenario.\n        \"\"\"\n        WranglerLogger.info(\"Creating Scenario\")\n\n        if isinstance(base_scenario, Scenario):\n            base_scenario = base_scenario.__dict__\n\n        if not set(BASE_SCENARIO_SUGGESTED_PROPS) <= set(base_scenario.keys()):\n            WranglerLogger.warning(\n                f\"Base_scenario doesn't contain {BASE_SCENARIO_SUGGESTED_PROPS}\"\n            )\n\n        self.base_scenario = base_scenario\n        self.name = name\n        # if the base scenario had roadway or transit networks, use them as the basis.\n        self.road_net: Optional[RoadwayNetwork] = copy.deepcopy(self.base_scenario.get(\"road_net\"))\n        self.transit_net: Optional[TransitNetwork] = copy.deepcopy(\n            self.base_scenario.get(\"transit_net\")\n        )\n\n        self.project_cards: dict[str, ProjectCard] = {}\n        self._planned_projects: list[str] = []\n        
self._queued_projects = None\n        self.applied_projects = self.base_scenario.get(\"applied_projects\", [])\n\n        self.prerequisites = self.base_scenario.get(\"prerequisites\", {})\n        self.corequisites = self.base_scenario.get(\"corequisites\", {})\n        self.conflicts = self.base_scenario.get(\"conflicts\", {})\n\n        for p in project_card_list:\n            self._add_project(p)\n\n    @property\n    def projects(self):\n        \"\"\"Returns a list of all projects in the scenario: applied and planned.\"\"\"\n        return self.applied_projects + self._planned_projects\n\n    @property\n    def queued_projects(self):\n        \"\"\"Returns a list version of _queued_projects queue.\n\n        Queued projects are those that have been planned, have all pre-requisites satisfied, and\n        have been ordered based on pre-requisites.\n\n        If no queued projects, will dynamically generate from planned projects based on\n        pre-requisites and return the queue.\n        \"\"\"\n        if not self._queued_projects:\n            self._check_projects_requirements_satisfied(self._planned_projects)\n            self._queued_projects = self.order_projects(self._planned_projects)\n        return list(self._queued_projects)\n\n    def __str__(self):\n        \"\"\"String representation of the Scenario object.\"\"\"\n        s = [\"{}: {}\".format(key, value) for key, value in self.__dict__.items()]\n        return \"\\n\".join(s)\n\n    def _add_dependencies(self, project_name, dependencies: dict) -> None:\n        \"\"\"Add dependencies from a project card to relevant scenario variables.\n\n        Updates existing \"prerequisites\", \"corequisites\" and \"conflicts\".\n        Lowercases everything to enable string matching.\n\n        Args:\n            project_name: name of project you are adding dependencies for.\n            dependencies: Dictionary of dependencies by dependency type and list of associated\n                projects.\n        
\"\"\"\n        project_name = project_name.lower()\n\n        for d, v in dependencies.items():\n            _dep = list(map(str.lower, v))\n            WranglerLogger.debug(f\"Adding {_dep} to {project_name} dependency table.\")\n            self.__dict__[d].update({project_name: _dep})\n\n    def _add_project(\n        self,\n        project_card: ProjectCard,\n        validate: bool = True,\n        filter_tags: Collection[str] = [],\n    ) -> None:\n        \"\"\"Adds a single ProjectCard instance to the Scenario.\n\n        Checks that a project of same name is not already in scenario.\n        If selected, will validate ProjectCard before adding.\n        If provided, will only add ProjectCard if it matches at least one filter_tags.\n\n        Resets scenario queued_projects.\n\n        Args:\n            project_card (ProjectCard): ProjectCard instance to add to scenario.\n            validate (bool, optional): If True, will validate the projectcard before\n                adding it to the scenario. Defaults to True.\n            filter_tags (Collection[str], optional): If used, will only add the project card if\n                its tags match one or more of these filter_tags. 
Defaults to []\n                which means no tag-filtering will occur.\n\n        \"\"\"\n        project_name = project_card.project.lower()\n        filter_tags = list(map(str.lower, filter_tags))\n\n        if project_name in self.projects:\n            raise ProjectCardError(\n                f\"Names not unique from existing scenario projects: {project_card.project}\"\n            )\n\n        if filter_tags and set(project_card.tags).isdisjoint(set(filter_tags)):\n            WranglerLogger.debug(\n                f\"Skipping {project_name} - no overlapping tags with {filter_tags}.\"\n            )\n            return\n\n        if validate:\n            assert project_card.valid\n\n        WranglerLogger.info(f\"Adding {project_name} to scenario.\")\n        self.project_cards[project_name] = project_card\n        self._planned_projects.append(project_name)\n        self._queued_projects = None\n        self._add_dependencies(project_name, project_card.dependencies)\n\n    def add_project_cards(\n        self,\n        project_card_list: Collection[ProjectCard],\n        validate: bool = True,\n        filter_tags: Collection[str] = [],\n    ) -> None:\n        \"\"\"Adds a list of ProjectCard instances to the Scenario.\n\n        Checks that a project of same name is not already in scenario.\n        If selected, will validate ProjectCard before adding.\n        If provided, will only add ProjectCard if it matches at least one filter_tags.\n\n        Args:\n            project_card_list (Collection[ProjectCard]): List of ProjectCard instances to add to\n                scenario.\n            validate (bool, optional): If True, will require each ProjectCard is validated before\n                being added to scenario. 
Defaults to True.\n            filter_tags (Collection[str], optional): If used, will filter ProjectCard instances\n                and only add those whose tags match one or more of these filter_tags.\n                Defaults to [] - which means no tag-filtering will occur.\n        \"\"\"\n        for p in project_card_list:\n            self._add_project(p, validate=validate, filter_tags=filter_tags)\n\n    def _check_projects_requirements_satisfied(self, project_list: Collection[str]):\n        \"\"\"Checks all requirements are satisfied to apply this specific set of projects.\n\n        Including:\n        1. has an associated project card\n        2. is in scenario's planned projects\n        3. pre-requisites satisfied\n        4. co-requisites satisfied by applied or co-applied projects\n        5. no conflicting applied or co-applied projects\n\n        Args:\n            project_list: list of projects to check requirements for.\n        \"\"\"\n        self._check_projects_planned(project_list)\n        self._check_projects_have_project_cards(project_list)\n        self._check_projects_prerequisites(project_list)\n        self._check_projects_corequisites(project_list)\n        self._check_projects_conflicts(project_list)\n\n    def _check_projects_planned(self, project_names: Collection[str]) -> None:\n        \"\"\"Checks that a list of projects is in the scenario's planned projects.\"\"\"\n        _missing_ps = [p for p in project_names if p not in self._planned_projects]\n        if _missing_ps:\n            raise ValueError(\n                f\"Projects are not in planned projects: \\n {_missing_ps}. 
Add them by \\\n                using add_project_cards(), add_projects_from_files(), or \\\n                add_projects_from_directory().\"\n            )\n\n    def _check_projects_have_project_cards(self, project_list: Collection[str]) -> bool:\n        \"\"\"Checks that a list of projects has an associated project card in the scenario.\"\"\"\n        _missing = [p for p in project_list if p not in self.project_cards]\n        if _missing:\n            WranglerLogger.error(\n                f\"Projects referenced which are missing project cards: {_missing}\"\n            )\n            return False\n        return True\n\n    def _check_projects_prerequisites(self, project_names: list[str]) -> None:\n        \"\"\"Check a list of projects' pre-requisites have been or will be applied to scenario.\"\"\"\n        if set(project_names).isdisjoint(set(self.prerequisites.keys())):\n            return\n        _prereqs = []\n        for p in project_names:\n            _prereqs += self.prerequisites.get(p, [])\n        _projects_applied = self.applied_projects + project_names\n        _missing = list(set(_prereqs) - set(_projects_applied))\n        if _missing:\n            WranglerLogger.debug(\n                f\"project_names: {project_names}\\nprojects_have_or_will_be_applied: \\\n                    {_projects_applied}\\nmissing: {_missing}\"\n            )\n            raise ScenarioPrerequisiteError(f\"Missing {len(_missing)} pre-requisites: {_missing}\")\n\n    def _check_projects_corequisites(self, project_names: list[str]) -> None:\n        \"\"\"Check a list of projects' co-requisites have been or will be applied to scenario.\"\"\"\n        if set(project_names).isdisjoint(set(self.corequisites.keys())):\n            return\n        _coreqs = []\n        for p in project_names:\n            _coreqs += self.corequisites.get(p, [])\n        _projects_applied = self.applied_projects + project_names\n        _missing = list(set(_coreqs) - 
set(_projects_applied))\n        if _missing:\n            WranglerLogger.debug(\n                f\"project_names: {project_names}\\nprojects_have_or_will_be_applied: \\\n                    {_projects_applied}\\nmissing: {_missing}\"\n            )\n            raise ScenarioCorequisiteError(f\"Missing {len(_missing)} corequisites: {_missing}\")\n\n    def _check_projects_conflicts(self, project_names: list[str]) -> None:\n        \"\"\"Checks that list of projects' conflicts have not been or will be applied to scenario.\"\"\"\n        # WranglerLogger.debug(\"Checking Conflicts...\")\n        projects_to_check = project_names + self.applied_projects\n        # WranglerLogger.debug(f\"\\nprojects_to_check:{projects_to_check}\\nprojects_with_conflicts:{set(self.conflicts.keys())}\")\n        if set(projects_to_check).isdisjoint(set(self.conflicts.keys())):\n            # WranglerLogger.debug(\"Projects have no conflicts to check\")\n            return\n        _conflicts = []\n        for p in project_names:\n            _conflicts += self.conflicts.get(p, [])\n        _conflict_problems = [p for p in _conflicts if p in projects_to_check]\n        if _conflict_problems:\n            WranglerLogger.warning(f\"Conflict Problems: \\n{_conflict_problems}\")\n            _conf_dict = {\n                k: v\n                for k, v in self.conflicts.items()\n                if k in projects_to_check and not set(v).isdisjoint(set(_conflict_problems))\n            }\n            WranglerLogger.debug(f\"Problematic Conflicts: \\n{_conf_dict}\")\n            raise ScenarioConflictError(f\"Found {len(_conflicts)} conflicts: {_conflict_problems}\")\n\n    def order_projects(self, project_list: Collection[str]) -> deque:\n        \"\"\"Orders a list of projects based on moving up pre-requisites into a deque.\n\n        Args:\n            project_list: list of projects to order\n\n        Returns: deque for applying projects.\n        \"\"\"\n        project_list = [p.lower() 
for p in project_list]\n        assert self._check_projects_have_project_cards(project_list)\n\n        # build prereq (adjacency) list for topological sort\n        adjacency_list = defaultdict(list)\n        visited_list = defaultdict(bool)\n\n        for project in project_list:\n            visited_list[project] = False\n            if not self.prerequisites.get(project):\n                continue\n            for prereq in self.prerequisites[project]:\n                # this will always be true, else would have been flagged in missing \\\n                # prerequisite check, but just in case\n                if prereq.lower() in project_list:\n                    adjacency_list[prereq.lower()].append(project)\n\n        # sorted_project_names is topological sorted project card names (based on prerequisite)\n        _ordered_projects = topological_sort(\n            adjacency_list=adjacency_list, visited_list=visited_list\n        )\n\n        if not set(_ordered_projects) == set(project_list):\n            _missing = list(set(project_list) - set(_ordered_projects))\n            raise ValueError(f\"Project sort resulted in missing projects: {_missing}\")\n\n        project_deque = deque(_ordered_projects)\n\n        WranglerLogger.debug(f\"Ordered Projects: \\n{project_deque}\")\n\n        return project_deque\n\n    def apply_all_projects(self):\n        \"\"\"Applies all planned projects in the queue.\"\"\"\n        # Call this to make sure projects are appropriately queued in hidden variable.\n        self.queued_projects\n\n        # Use hidden variable.\n        while self._queued_projects:\n            self._apply_project(self._queued_projects.popleft())\n\n        # set this so it will trigger re-queuing any more projects.\n        self._queued_projects = None\n\n    def _apply_change(self, change: Union[ProjectCard, SubProject]) -> None:\n        \"\"\"Applies a specific change specified in a project card.\n\n        Change type must be in at least one 
of:\n        - ROADWAY_CATEGORIES\n        - TRANSIT_CATEGORIES\n\n        Args:\n            change: a project card or subproject card\n        \"\"\"\n        if change.change_type in ROADWAY_CARD_TYPES:\n            if not self.road_net:\n                raise ValueError(\"Missing Roadway Network\")\n            self.road_net.apply(change)\n        if change.change_type in TRANSIT_CARD_TYPES:\n            if not self.transit_net:\n                raise ValueError(\"Missing Transit Network\")\n            self.transit_net.apply(change)\n        if change.change_type in SECONDARY_TRANSIT_CARD_TYPES and self.transit_net:\n            self.transit_net.apply(change)\n\n        if change.change_type not in TRANSIT_CARD_TYPES + ROADWAY_CARD_TYPES:\n            raise ProjectCardError(\n                f\"Project {change.project}: Don't understand project cat: {change.change_type}\"\n            )\n\n    def _apply_project(self, project_name: str) -> None:\n        \"\"\"Applies project card to scenario.\n\n        If a list of changes is specified in referenced project card, iterates through each change.\n\n        Args:\n            project_name (str): name of project to be applied.\n        \"\"\"\n        project_name = project_name.lower()\n\n        WranglerLogger.info(f\"Applying {project_name}\")\n\n        p = self.project_cards[project_name]\n        WranglerLogger.debug(f\"types: {p.change_types}\")\n        WranglerLogger.debug(f\"type: {p.change_type}\")\n        if p.sub_projects:\n            for sp in p.sub_projects:\n                WranglerLogger.debug(f\"- applying subproject: {sp.change_type}\")\n                self._apply_change(sp)\n\n        else:\n            self._apply_change(p)\n\n        self._planned_projects.remove(project_name)\n        self.applied_projects.append(project_name)\n\n    def apply_projects(self, project_list: Collection[str]):\n        \"\"\"Applies a specific list of projects from the planned project queue.\n\n        Will 
order the list of projects based on pre-requisites.\n\n        NOTE: does not check co-requisites because that isn't possible when applying a single project at a time.\n\n        Args:\n            project_list: List of projects to be applied. All need to be in the planned project\n                queue.\n        \"\"\"\n        project_list = [p.lower() for p in project_list]\n\n        self._check_projects_requirements_satisfied(project_list)\n        ordered_project_queue = self.order_projects(project_list)\n\n        while ordered_project_queue:\n            self._apply_project(ordered_project_queue.popleft())\n\n        # Set so that when called again it will retrigger queueing from planned projects.\n        self._ordered_projects = None\n\n    def write(self, path: Union[Path, str], name: str) -> None:\n        \"\"\"Writes scenario networks and a scenario summary to disk.\n\n        Args:\n            path: Path to write scenario networks and scenario summary to.\n            name: Name to use.\n        \"\"\"\n        if self.road_net:\n            write_roadway(self.road_net, prefix=name, out_dir=path)\n        if self.transit_net:\n            write_transit(self.transit_net, prefix=name, out_dir=path)\n        self.summarize(outfile=os.path.join(path, name))\n\n    def summarize(self, project_detail: bool = True, outfile: str = \"\", mode: str = \"a\") -> str:\n        \"\"\"A high level summary of the created scenario.\n\n        Args:\n            project_detail: If True (default), will write out project card summaries.\n            outfile: If specified, will write scenario summary to text file.\n            mode: Outfile open mode. 'a' to append 'w' to overwrite.\n\n        Returns:\n            string of summary\n\n        \"\"\"\n        return scenario_summary(self, project_detail, outfile, mode)\n
"},{"location":"api/#network_wrangler.scenario.Scenario.projects","title":"projects property","text":"

Returns a list of all projects in the scenario: applied and planned.

"},{"location":"api/#network_wrangler.scenario.Scenario.queued_projects","title":"queued_projects property","text":"

Returns a list version of the _queued_projects queue.

Queued projects are those that have been planned, have all pre-requisites satisfied, and have been ordered based on pre-requisites.

If no queued projects, will dynamically generate from planned projects based on pre-requisites and return the queue.

"},{"location":"api/#network_wrangler.scenario.Scenario.__init__","title":"__init__(base_scenario, project_card_list=[], name='')","text":"

Constructor.

Source code in network_wrangler/scenario.py
def __init__(\n    self,\n    base_scenario: Union[Scenario, dict],\n    project_card_list: list[ProjectCard] = [],\n    name=\"\",\n):\n    \"\"\"Constructor.\n\n    Args:\n    base_scenario: A base scenario object to base this instance off of, or a dict which\n        describes the scenario attributes including applied projects and respective conflicts.\n        `{\"applied_projects\": [],\"conflicts\":{...}}`\n    project_card_list: Optional list of ProjectCard instances to add to planned projects.\n    name: Optional name for the scenario.\n    \"\"\"\n    WranglerLogger.info(\"Creating Scenario\")\n\n    if isinstance(base_scenario, Scenario):\n        base_scenario = base_scenario.__dict__\n\n    if not set(BASE_SCENARIO_SUGGESTED_PROPS) <= set(base_scenario.keys()):\n        WranglerLogger.warning(\n            f\"Base_scenario doesn't contain {BASE_SCENARIO_SUGGESTED_PROPS}\"\n        )\n\n    self.base_scenario = base_scenario\n    self.name = name\n    # if the base scenario had roadway or transit networks, use them as the basis.\n    self.road_net: Optional[RoadwayNetwork] = copy.deepcopy(self.base_scenario.get(\"road_net\"))\n    self.transit_net: Optional[TransitNetwork] = copy.deepcopy(\n        self.base_scenario.get(\"transit_net\")\n    )\n\n    self.project_cards: dict[str, ProjectCard] = {}\n    self._planned_projects: list[str] = []\n    self._queued_projects = None\n    self.applied_projects = self.base_scenario.get(\"applied_projects\", [])\n\n    self.prerequisites = self.base_scenario.get(\"prerequisites\", {})\n    self.corequisites = self.base_scenario.get(\"corequisites\", {})\n    self.conflicts = self.base_scenario.get(\"conflicts\", {})\n\n    for p in project_card_list:\n        self._add_project(p)\n
"},{"location":"api/#network_wrangler.scenario.Scenario.__str__","title":"__str__()","text":"

String representation of the Scenario object.

Source code in network_wrangler/scenario.py
def __str__(self):\n    \"\"\"String representation of the Scenario object.\"\"\"\n    s = [\"{}: {}\".format(key, value) for key, value in self.__dict__.items()]\n    return \"\\n\".join(s)\n
"},{"location":"api/#network_wrangler.scenario.Scenario.add_project_cards","title":"add_project_cards(project_card_list, validate=True, filter_tags=[])","text":"

Adds a list of ProjectCard instances to the Scenario.

Checks that a project of the same name is not already in the scenario. If validate is selected, each ProjectCard is validated before being added. If filter_tags are provided, a ProjectCard is only added if it matches at least one of them.

Parameters:

Name Type Description Default project_card_list Collection[ProjectCard]

List of ProjectCard instances to add to scenario.

required validate bool

If True, will require each ProjectCard is validated before being added to scenario. Defaults to True.

True filter_tags Collection[str]

If used, will filter ProjectCard instances and only add those whose tags match one or more of these filter_tags. Defaults to [], which means no tag-filtering will occur.

[] Source code in network_wrangler/scenario.py
def add_project_cards(\n    self,\n    project_card_list: Collection[ProjectCard],\n    validate: bool = True,\n    filter_tags: Collection[str] = [],\n) -> None:\n    \"\"\"Adds a list of ProjectCard instances to the Scenario.\n\n    Checks that a project of same name is not already in scenario.\n    If selected, will validate ProjectCard before adding.\n    If provided, will only add ProjectCard if it matches at least one filter_tags.\n\n    Args:\n        project_card_list (Collection[ProjectCard]): List of ProjectCard instances to add to\n            scenario.\n        validate (bool, optional): If True, will require each ProjectCard is validated before\n            being added to scenario. Defaults to True.\n        filter_tags (Collection[str], optional): If used, will filter ProjectCard instances\n            and only add those whose tags match one or more of these filter_tags.\n            Defaults to [] - which means no tag-filtering will occur.\n    \"\"\"\n    for p in project_card_list:\n        self._add_project(p, validate=validate, filter_tags=filter_tags)\n
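The tag-filtering behavior described above can be sketched as follows. This is a minimal, self-contained illustration, not network_wrangler's actual implementation; the `Card` class and the tag values are hypothetical stand-ins for ProjectCard:

```python
# Minimal sketch of tag-based filtering as described above (hypothetical
# ProjectCard stand-in; not network_wrangler's actual implementation).
from dataclasses import dataclass, field


@dataclass
class Card:
    """Hypothetical stand-in for a ProjectCard with a name and tags."""
    name: str
    tags: list = field(default_factory=list)


def filter_cards(cards, filter_tags=()):
    """Keep a card if no filter_tags are given, or if it shares at least one tag."""
    if not filter_tags:
        return list(cards)  # empty filter_tags: no tag-filtering occurs
    wanted = {t.lower() for t in filter_tags}
    return [c for c in cards if wanted & {t.lower() for t in c.tags}]


cards = [Card("widen_i35", ["highway"]), Card("new_brt", ["transit"])]
print([c.name for c in filter_cards(cards, ["transit"])])  # ['new_brt']
```

With an empty `filter_tags` every card passes through, matching the documented default of no tag-filtering.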
"},{"location":"api/#network_wrangler.scenario.Scenario.apply_all_projects","title":"apply_all_projects()","text":"

Applies all planned projects in the queue.

Source code in network_wrangler/scenario.py
def apply_all_projects(self):\n    \"\"\"Applies all planned projects in the queue.\"\"\"\n    # Call this to make sure projects are appropriately queued in hidden variable.\n    self.queued_projects\n\n    # Use hidden variable.\n    while self._queued_projects:\n        self._apply_project(self._queued_projects.popleft())\n\n    # set this so it will trigger re-queuing any more projects.\n    self._queued_projects = None\n
"},{"location":"api/#network_wrangler.scenario.Scenario.apply_projects","title":"apply_projects(project_list)","text":"

Applies a specific list of projects from the planned project queue.

Will order the list of projects based on pre-requisites.

NOTE: does not check co-requisites because that isn't possible when applying a single project at a time.

Parameters:

Name Type Description Default project_list Collection[str]

List of projects to be applied. All need to be in the planned project queue.

required Source code in network_wrangler/scenario.py
def apply_projects(self, project_list: Collection[str]):\n    \"\"\"Applies a specific list of projects from the planned project queue.\n\n    Will order the list of projects based on pre-requisites.\n\n    NOTE: does not check co-requisites because that isn't possible when applying a single project at a time.\n\n    Args:\n        project_list: List of projects to be applied. All need to be in the planned project\n            queue.\n    \"\"\"\n    project_list = [p.lower() for p in project_list]\n\n    self._check_projects_requirements_satisfied(project_list)\n    ordered_project_queue = self.order_projects(project_list)\n\n    while ordered_project_queue:\n        self._apply_project(ordered_project_queue.popleft())\n\n    # Set so that when called again it will retrigger queueing from planned projects.\n    self._ordered_projects = None\n
"},{"location":"api/#network_wrangler.scenario.Scenario.order_projects","title":"order_projects(project_list)","text":"

Orders a list of projects into a deque, moving pre-requisites ahead of the projects that depend on them.

Parameters:

Name Type Description Default project_list Collection[str]

list of projects to order

required Source code in network_wrangler/scenario.py
def order_projects(self, project_list: Collection[str]) -> deque:\n    \"\"\"Orders a list of projects based on moving up pre-requisites into a deque.\n\n    Args:\n        project_list: list of projects to order\n\n    Returns: deque for applying projects.\n    \"\"\"\n    project_list = [p.lower() for p in project_list]\n    assert self._check_projects_have_project_cards(project_list)\n\n    # build prereq (adjacency) list for topological sort\n    adjacency_list = defaultdict(list)\n    visited_list = defaultdict(bool)\n\n    for project in project_list:\n        visited_list[project] = False\n        if not self.prerequisites.get(project):\n            continue\n        for prereq in self.prerequisites[project]:\n            # this will always be true, else would have been flagged in missing \\\n            # prerequisite check, but just in case\n            if prereq.lower() in project_list:\n                adjacency_list[prereq.lower()] = [project]\n\n    # sorted_project_names is topological sorted project card names (based on prerequisite)\n    _ordered_projects = topological_sort(\n        adjacency_list=adjacency_list, visited_list=visited_list\n    )\n\n    if not set(_ordered_projects) == set(project_list):\n        _missing = list(set(project_list) - set(_ordered_projects))\n        raise ValueError(f\"Project sort resulted in missing projects: {_missing}\")\n\n    project_deque = deque(_ordered_projects)\n\n    WranglerLogger.debug(f\"Ordered Projects: \\n{project_deque}\")\n\n    return project_deque\n
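The same prerequisite-first ordering can be sketched with the standard library's `graphlib`. This is a simplified stand-in for the `topological_sort` helper used in the source above, assuming a prerequisites mapping shaped like `Scenario.prerequisites` (project name to list of prerequisite names):

```python
# Simplified sketch of prerequisite-based project ordering using the
# standard library; a stand-in for the topological_sort helper above.
from collections import deque
from graphlib import TopologicalSorter


def order_projects(project_list, prerequisites):
    """Return a deque of projects with each pre-requisite ahead of its dependents."""
    projects = [p.lower() for p in project_list]
    # seed every project as a node so projects without prerequisites survive
    ts = TopologicalSorter({p: set() for p in projects})
    for project in projects:
        for prereq in prerequisites.get(project, []):
            if prereq.lower() in projects:
                ts.add(project, prereq.lower())  # project depends on prereq
    return deque(ts.static_order())


q = order_projects(["Widen", "Repave"], {"repave": ["widen"]})
print(list(q))  # ['widen', 'repave']
```

`TopologicalSorter` also raises `CycleError` on circular prerequisites, which the source handles separately via its missing-projects check.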
"},{"location":"api/#network_wrangler.scenario.Scenario.summarize","title":"summarize(project_detail=True, outfile='', mode='a')","text":"

A high level summary of the created scenario.

Parameters:

Name Type Description Default project_detail bool

If True (default), will write out project card summaries.

True outfile str

If specified, will write scenario summary to text file.

'' mode str

Outfile open mode. \u2018a\u2019 to append, \u2018w\u2019 to overwrite.

'a'

Returns:

Type Description str

string of summary

Source code in network_wrangler/scenario.py
def summarize(self, project_detail: bool = True, outfile: str = \"\", mode: str = \"a\") -> str:\n    \"\"\"A high level summary of the created scenario.\n\n    Args:\n        project_detail: If True (default), will write out project card summaries.\n        outfile: If specified, will write scenario summary to text file.\n        mode: Outfile open mode. 'a' to append 'w' to overwrite.\n\n    Returns:\n        string of summary\n\n    \"\"\"\n    return scenario_summary(self, project_detail, outfile, mode)\n
"},{"location":"api/#network_wrangler.scenario.Scenario.write","title":"write(path, name)","text":"

Writes scenario networks and a scenario summary to disk.

Parameters:

Name Type Description Default path Union[Path, str]

Path to write scenario networks and scenario summary to.

required name str

Name to use.

required Source code in network_wrangler/scenario.py
def write(self, path: Union[Path, str], name: str) -> None:\n    \"\"\"Writes scenario networks and a scenario summary to disk.\n\n    Args:\n        path: Path to write scenario networks and scenario summary to.\n        name: Name to use.\n    \"\"\"\n    if self.road_net:\n        write_roadway(self.road_net, prefix=name, out_dir=path)\n    if self.transit_net:\n        write_transit(self.transit_net, prefix=name, out_dir=path)\n    self.summarize(outfile=os.path.join(path, name))\n
"},{"location":"api/#network_wrangler.scenario.ScenarioConflictError","title":"ScenarioConflictError","text":"

Bases: Exception

Raised when a conflict is detected.

Source code in network_wrangler/scenario.py
class ScenarioConflictError(Exception):\n    \"\"\"Raised when a conflict is detected.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.scenario.ScenarioCorequisiteError","title":"ScenarioCorequisiteError","text":"

Bases: Exception

Raised when a co-requisite is not satisfied.

Source code in network_wrangler/scenario.py
class ScenarioCorequisiteError(Exception):\n    \"\"\"Raised when a co-requisite is not satisfied.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.scenario.ScenarioPrerequisiteError","title":"ScenarioPrerequisiteError","text":"

Bases: Exception

Raised when a pre-requisite is not satisfied.

Source code in network_wrangler/scenario.py
class ScenarioPrerequisiteError(Exception):\n    \"\"\"Raised when a pre-requisite is not satisfied.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.scenario.create_base_scenario","title":"create_base_scenario(base_shape_name, base_link_name, base_node_name, roadway_dir='', transit_dir='')","text":"

Creates a base scenario dictionary from roadway and transit network files.

Parameters:

Name Type Description Default base_shape_name str

filename of the base network shape

required base_link_name str

filename of the base network link

required base_node_name str

filename of the base network node

required roadway_dir str

optional path to the base scenario roadway network files

'' transit_dir str

optional path to base scenario transit files

'' Source code in network_wrangler/scenario.py
def create_base_scenario(\n    base_shape_name: str,\n    base_link_name: str,\n    base_node_name: str,\n    roadway_dir: str = \"\",\n    transit_dir: str = \"\",\n) -> dict:\n    \"\"\"Creates a base scenario dictionary from roadway and transit network files.\n\n    Args:\n        base_shape_name: filename of the base network shape\n        base_link_name: filename of the base network link\n        base_node_name: filename of the base network node\n        roadway_dir: optional path to the base scenario roadway network files\n        transit_dir: optional path to base scenario transit files\n    \"\"\"\n    if roadway_dir:\n        base_network_shape_file = os.path.join(roadway_dir, base_shape_name)\n        base_network_link_file = os.path.join(roadway_dir, base_link_name)\n        base_network_node_file = os.path.join(roadway_dir, base_node_name)\n    else:\n        base_network_shape_file = base_shape_name\n        base_network_link_file = base_link_name\n        base_network_node_file = base_node_name\n\n    road_net = load_roadway(\n        links_file=base_network_link_file,\n        nodes_file=base_network_node_file,\n        shapes_file=base_network_shape_file,\n    )\n\n    if transit_dir:\n        transit_net = load_transit(transit_dir)\n        transit_net.road_net = road_net\n    else:\n        transit_net = None\n        WranglerLogger.info(\n            \"No transit directory specified, base scenario will have empty transit network.\"\n        )\n\n    base_scenario = {\"road_net\": road_net, \"transit_net\": transit_net}\n\n    return base_scenario\n
"},{"location":"api/#network_wrangler.scenario.create_scenario","title":"create_scenario(base_scenario={}, project_card_list=[], project_card_filepath=None, filter_tags=[], validate=True)","text":"

Creates scenario from a base scenario and adds project cards.

Project cards can be added using any/all of the following methods: 1. List of ProjectCard instances 2. List of ProjectCard files 3. A directory (and optional glob pattern) to search for project card files in

Checks that a project of the same name is not already in the scenario. If validate is selected, each ProjectCard is validated before being added. If filter_tags are provided, a ProjectCard is only added if it matches at least one of them.

Parameters:

Name Type Description Default base_scenario Union[Scenario, dict]

base Scenario instance or a dictionary of attributes.

{} project_card_list

List of ProjectCard instances to create Scenario from. Defaults to [].

[] project_card_filepath Optional[Union[Collection[str], str]]

Where the project card is: a single path, list of paths, a directory, or a glob pattern.

None filter_tags Collection[str]

If used, will only add the project card if its tags match one or more of these filter_tags. Defaults to [], which means no tag-filtering will occur.

[] validate bool

If True, will validate the project card before adding it to the scenario. Defaults to True.

True Source code in network_wrangler/scenario.py
def create_scenario(\n    base_scenario: Union[Scenario, dict] = {},\n    project_card_list=[],\n    project_card_filepath: Optional[Union[Collection[str], str]] = None,\n    filter_tags: Collection[str] = [],\n    validate=True,\n) -> Scenario:\n    \"\"\"Creates scenario from a base scenario and adds project cards.\n\n    Project cards can be added using any/all of the following methods:\n    1. List of ProjectCard instances\n    2. List of ProjectCard files\n    3. Directory and optional glob search to find project card files in\n\n    Checks that a project of same name is not already in scenario.\n    If selected, will validate ProjectCard before adding.\n    If provided, will only add ProjectCard if it matches at least one filter_tags.\n\n    Args:\n        base_scenario: base Scenario scenario instances of dictionary of attributes.\n        project_card_list: List of ProjectCard instances to create Scenario from. Defaults\n            to [].\n        project_card_filepath: where the project card is.  A single path, list of paths,\n        a directory, or a glob pattern. Defaults to None.\n        filter_tags (Collection[str], optional): If used, will only add the project card if\n            its tags match one or more of these filter_tags. Defaults to []\n            which means no tag-filtering will occur.\n        validate (bool, optional): If True, will validate the projectcard before\n            being adding it to the scenario. Defaults to True.\n    \"\"\"\n    scenario = Scenario(base_scenario)\n\n    if project_card_filepath:\n        project_card_list += list(\n            read_cards(project_card_filepath, filter_tags=filter_tags).values()\n        )\n\n    if project_card_list:\n        scenario.add_project_cards(project_card_list, filter_tags=filter_tags, validate=validate)\n\n    return scenario\n
"},{"location":"api/#network_wrangler.scenario.scenario_summary","title":"scenario_summary(scenario, project_detail=True, outfile='', mode='a')","text":"

A high level summary of the created scenario.

Parameters:

Name Type Description Default scenario Scenario

Scenario instance to summarize.

required project_detail bool

If True (default), will write out project card summaries.

True outfile str

If specified, will write scenario summary to text file.

'' mode str

Outfile open mode. \u2018a\u2019 to append, \u2018w\u2019 to overwrite.

'a'

Returns:

Type Description str

string of summary

Source code in network_wrangler/scenario.py
def scenario_summary(\n    scenario: Scenario, project_detail: bool = True, outfile: str = \"\", mode: str = \"a\"\n) -> str:\n    \"\"\"A high level summary of the created scenario.\n\n    Args:\n        scenario: Scenario instance to summarize.\n        project_detail: If True (default), will write out project card summaries.\n        outfile: If specified, will write scenario summary to text file.\n        mode: Outfile open mode. 'a' to append 'w' to overwrite.\n\n    Returns:\n        string of summary\n    \"\"\"\n    WranglerLogger.info(f\"Summarizing Scenario {scenario.name}\")\n    report_str = \"------------------------------\\n\"\n    report_str += f\"Scenario created on {datetime.now()}\\n\"\n\n    report_str += \"Base Scenario:\\n\"\n    report_str += \"--Road Network:\\n\"\n    report_str += f\"----Link File: {scenario.base_scenario['road_net']._links_file}\\n\"\n    report_str += f\"----Node File: {scenario.base_scenario['road_net']._nodes_file}\\n\"\n    report_str += f\"----Shape File: {scenario.base_scenario['road_net']._shapes_file}\\n\"\n    report_str += \"--Transit Network:\\n\"\n    report_str += f\"----Feed Path: {scenario.base_scenario['transit_net'].feed.feed_path}\\n\"\n\n    report_str += \"\\nProject Cards:\\n -\"\n    report_str += \"\\n-\".join([str(pc.file) for p, pc in scenario.project_cards.items()])\n\n    report_str += \"\\nApplied Projects:\\n-\"\n    report_str += \"\\n-\".join(scenario.applied_projects)\n\n    if project_detail:\n        report_str += \"\\n---Project Card Details---\\n\"\n        for p in scenario.project_cards:\n            report_str += \"\\n{}\".format(\n                pprint.pformat(\n                    [scenario.project_cards[p].__dict__ for p in scenario.applied_projects]\n                )\n            )\n\n    if outfile:\n        with open(outfile, mode) as f:\n            f.write(report_str)\n        WranglerLogger.info(f\"Wrote Scenario Report to: {outfile}\")\n\n    return report_str\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork","title":"RoadwayNetwork","text":"

Bases: BaseModel

Representation of a Roadway Network.

Typical usage example:

net = load_roadway(\n    links_file=MY_LINK_FILE,\n    nodes_file=MY_NODE_FILE,\n    shapes_file=MY_SHAPE_FILE,\n)\nmy_selection = {\n    \"link\": [{\"name\": [\"I 35E\"]}],\n    \"A\": {\"osm_node_id\": \"961117623\"},  # start searching for segments at A\n    \"B\": {\"osm_node_id\": \"2564047368\"},\n}\nnet.get_selection(my_selection)\n\nmy_change = [\n    {\n        'property': 'lanes',\n        'existing': 1,\n        'set': 2,\n    },\n    {\n        'property': 'drive_access',\n        'set': 0,\n    },\n]\n\nmy_net.apply_roadway_feature_change(\n    my_net.get_selection(my_selection),\n    my_change\n)\n\n    net.model_net\n    net.is_network_connected(mode=\"drive\", nodes=self.m_nodes_df, links=self.m_links_df)\n    _, disconnected_nodes = net.assess_connectivity(\n        mode=\"walk\",\n        ignore_end_nodes=True,\n        nodes=self.m_nodes_df,\n        links=self.m_links_df\n    )\n    write_roadway(net,filename=my_out_prefix, path=my_dir, for_model = True)\n

Attributes:

Name Type Description nodes_df RoadNodesTable

dataframe of node records.

links_df RoadLinksTable

dataframe of link records and associated properties.

shapes_df RoadShapesTable

dataframe of detailed shape records. This is lazily created only when accessed, because shapes files can be expensive to read.

selections dict

dictionary of stored roadway selection objects, mapped by RoadwayLinkSelection.sel_key or RoadwayNodeSelection.sel_key in case they are made repeatedly.

crs str

coordinate reference system in EPSG number format. Defaults to DEFAULT_CRS, which is set to 4326 (WGS 84 Lat/Long).

network_hash str

dynamic property holding the hashed value of links_df and nodes_df. Used for quickly identifying whether a network has changed since various expensive operations took place (e.g. generating a ModelRoadwayNetwork or a network graph).

model_net ModelRoadwayNetwork

referenced ModelRoadwayNetwork object which will be lazily created if None or if the network_hash has changed.
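The change-detection idea behind network_hash can be sketched as follows. This mirrors the hashing pattern in the source below in spirit only; the `df_hash` helper here is a simplified hypothetical stand-in for the dataframe-hash method, not network_wrangler's implementation:

```python
# Sketch of a content hash over two tables, mirroring the idea behind
# network_hash: hash the data so expensive derived objects (model network,
# graphs) can be invalidated when links or nodes change. df_hash is a
# simplified hypothetical stand-in, not network_wrangler's implementation.
import hashlib


def df_hash(records) -> str:
    """Stable hash of a list-of-tuples stand-in for a dataframe."""
    return hashlib.sha256(repr(sorted(records)).encode()).hexdigest()


def network_hash(links, nodes) -> str:
    # combine the two table hashes, then hash again (as the source does)
    value = str.encode(df_hash(links) + "-" + df_hash(nodes))
    return hashlib.sha256(value).hexdigest()


links = [(1, 2), (2, 3)]
nodes = [(1,), (2,), (3,)]
h1 = network_hash(links, nodes)
h2 = network_hash(links + [(3, 4)], nodes)
print(h1 != h2)  # True: any edit to links or nodes changes the hash
```

Caching a derived object alongside the hash it was built from, then rebuilding only when the hashes differ, is exactly how the lazily created `model_net` property described above avoids unnecessary recomputation.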

Source code in network_wrangler/roadway/network.py
class RoadwayNetwork(BaseModel):\n    \"\"\"Representation of a Roadway Network.\n\n    Typical usage example:\n\n    ```py\n    net = load_roadway(\n        links_file=MY_LINK_FILE,\n        nodes_file=MY_NODE_FILE,\n        shapes_file=MY_SHAPE_FILE,\n    )\n    my_selection = {\n        \"link\": [{\"name\": [\"I 35E\"]}],\n        \"A\": {\"osm_node_id\": \"961117623\"},  # start searching for segments at A\n        \"B\": {\"osm_node_id\": \"2564047368\"},\n    }\n    net.get_selection(my_selection)\n\n    my_change = [\n        {\n            'property': 'lanes',\n            'existing': 1,\n            'set': 2,\n        },\n        {\n            'property': 'drive_access',\n            'set': 0,\n        },\n    ]\n\n    my_net.apply_roadway_feature_change(\n        my_net.get_selection(my_selection),\n        my_change\n    )\n\n        net.model_net\n        net.is_network_connected(mode=\"drive\", nodes=self.m_nodes_df, links=self.m_links_df)\n        _, disconnected_nodes = net.assess_connectivity(\n            mode=\"walk\",\n            ignore_end_nodes=True,\n            nodes=self.m_nodes_df,\n            links=self.m_links_df\n        )\n        write_roadway(net,filename=my_out_prefix, path=my_dir, for_model = True)\n    ```\n\n    Attributes:\n        nodes_df (RoadNodesTable): dataframe of of node records.\n        links_df (RoadLinksTable): dataframe of link records and associated properties.\n        shapes_df (RoadShapestable): data from of detailed shape records  This is lazily\n            created iff it is called because shapes files can be expensive to read.\n        selections (dict): dictionary of stored roadway selection objects, mapped by\n            `RoadwayLinkSelection.sel_key` or `RoadwayNodeSelection.sel_key` in case they are\n                made repeatedly.\n        crs (str): coordinate reference system in ESPG number format. 
Defaults to DEFAULT_CRS\n            which is set to 4326, WGS 84 Lat/Long\n        network_hash: dynamic property of the hashed value of links_df and nodes_df. Used for\n            quickly identifying if a network has changed since various expensive operations have\n            taken place (i.e. generating a ModelRoadwayNetwork or a network graph)\n        model_net (ModelRoadwayNetwork): referenced `ModelRoadwayNetwork` object which will be\n            lazily created if None or if the `network_hash` has changed.\n    \"\"\"\n\n    crs: Literal[LAT_LON_CRS] = LAT_LON_CRS\n    nodes_df: DataFrame[RoadNodesTable]\n    links_df: DataFrame[RoadLinksTable]\n    _shapes_df: Optional[DataFrame[RoadShapesTable]] = None\n\n    _links_file: Optional[Path] = None\n    _nodes_file: Optional[Path] = None\n    _shapes_file: Optional[Path] = None\n\n    _shapes_params: ShapesParams = ShapesParams()\n    _model_net: Optional[ModelRoadwayNetwork] = None\n    _selections: dict[str, Selections] = {}\n    _modal_graphs: dict[str, dict] = defaultdict(lambda: {\"graph\": None, \"hash\": None})\n\n    @field_validator(\"nodes_df\", \"links_df\")\n    def coerce_crs(cls, v, info):\n        \"\"\"Coerce crs of nodes_df and links_df to network crs.\"\"\"\n        net_crs = info.data[\"crs\"]\n        if v.crs != net_crs:\n            WranglerLogger.warning(\n                f\"CRS of links_df ({v.crs}) doesn't match network crs {net_crs}. \\\n                    Changing to network crs.\"\n            )\n            v.to_crs(net_crs)\n        return v\n\n    @property\n    def shapes_df(self) -> DataFrame[RoadShapesTable]:\n        \"\"\"Load and return RoadShapesTable.\n\n        If not already loaded, will read from shapes_file and return. If shapes_file is None,\n        will return an empty dataframe with the right schema. 
If shapes_df is already set, will\n        return that.\n        \"\"\"\n        if (self._shapes_df is None or self._shapes_df.empty) and self._shapes_file is not None:\n            self._shapes_df = read_shapes(\n                self._shapes_file,\n                in_crs=self.crs,\n                shapes_params=self._shapes_params,\n            )\n        # if there is NONE, then at least create an empty dataframe with right schema\n        elif self._shapes_df is None:\n            self._shapes_df = empty_df_from_datamodel(RoadShapesTable, crs=self.crs)\n            self._shapes_df.set_index(\"shape_id_idx\", inplace=True)\n\n        return self._shapes_df\n\n    @shapes_df.setter\n    def shapes_df(self, value):\n        self._shapes_df = df_to_shapes_df(value, shapes_params=self._shapes_params)\n\n    @property\n    def network_hash(self) -> str:\n        \"\"\"Hash of the links and nodes dataframes.\"\"\"\n        _value = str.encode(self.links_df.df_hash() + \"-\" + self.nodes_df.df_hash())\n\n        _hash = hashlib.sha256(_value).hexdigest()\n        return _hash\n\n    @property\n    def model_net(self) -> ModelRoadwayNetwork:\n        \"\"\"Return a ModelRoadwayNetwork object for this network.\"\"\"\n        if self._model_net is None or self._model_net._net_hash != self.network_hash:\n            self._model_net = ModelRoadwayNetwork(self)\n        return self._model_net\n\n    @property\n    def summary(self) -> dict:\n        \"\"\"Quick summary dictionary of number of links, nodes.\"\"\"\n        d = {\n            \"links\": len(self.links_df),\n            \"nodes\": len(self.nodes_df),\n        }\n        return d\n\n    @property\n    def link_shapes_df(self) -> gpd.GeoDataFrame:\n        \"\"\"Add shape geometry to links if available.\n\n        returns: shapes merged to nodes dataframe\n        \"\"\"\n        _links_df = copy.deepcopy(self.links_df)\n        link_shapes_df = _links_df.merge(\n            self.shapes_df,\n            
left_on=self.links_df.params.fk_to_shape,\n            right_on=self.shapes_df.params.primary_key,\n            how=\"left\",\n        )\n        return link_shapes_df\n\n    def get_property_by_timespan_and_group(\n        self,\n        link_property: str,\n        category: Union[str, int] = DEFAULT_CATEGORY,\n        timespan: TimespanString = DEFAULT_TIMESPAN,\n        strict_timespan_match: bool = False,\n        min_overlap_minutes: int = 60,\n    ) -> Any:\n        \"\"\"Returns a new dataframe with model_link_id and link property by category and timespan.\n\n        Convenience method for backward compatability.\n\n        Args:\n            link_property: link property to query\n            category: category to query or a list of categories. Defaults to DEFAULT_CATEGORY.\n            timespan: timespan to query in the form of [\"HH:MM\",\"HH:MM\"].\n                Defaults to DEFAULT_TIMESPAN.\n            strict_timespan_match: If True, will only return links that match the timespan exactly.\n                Defaults to False.\n            min_overlap_minutes: If strict_timespan_match is False, will return links that overlap\n                with the timespan by at least this many minutes. 
Defaults to 60.\n        \"\"\"\n        from .links.scopes import prop_for_scope\n\n        return prop_for_scope(\n            self.links_df,\n            link_property,\n            timespan=timespan,\n            category=category,\n            strict_timespan_match=strict_timespan_match,\n            min_overlap_minutes=min_overlap_minutes,\n        )\n\n    def get_selection(\n        self,\n        selection_dict: Union[dict, SelectFacility],\n        overwrite: bool = False,\n    ) -> Union[RoadwayNodeSelection, RoadwayLinkSelection]:\n        \"\"\"Return selection if it already exists, otherwise performs selection.\n\n        Args:\n            selection_dict (dict): SelectFacility dictionary.\n            overwrite: if True, will overwrite any previously cached searches. Defaults to False.\n        \"\"\"\n        key = _create_selection_key(selection_dict)\n        if (key in self._selections) and not overwrite:\n            WranglerLogger.debug(f\"Using cached selection from key: {key}\")\n            return self._selections[key]\n\n        if isinstance(selection_dict, SelectFacility):\n            selection_data = selection_dict\n        elif isinstance(selection_dict, SelectLinksDict):\n            selection_data = SelectFacility(links=selection_dict)\n        elif isinstance(selection_dict, SelectNodesDict):\n            selection_data = SelectFacility(nodes=selection_dict)\n        elif isinstance(selection_dict, dict):\n            selection_data = SelectFacility(**selection_dict)\n        else:\n            WranglerLogger.error(f\"`selection_dict` arg must be a dictionary or SelectFacility model.\\\n                             Received: {selection_dict} of type {type(selection_dict)}\")\n            raise SelectionError(\"selection_dict arg must be a dictionary or SelectFacility model\")\n\n        WranglerLogger.debug(f\"Getting selection from key: {key}\")\n        if selection_data.feature_types in [\"links\", \"segment\"]:\n            
return RoadwayLinkSelection(self, selection_dict)\n        elif selection_data.feature_types == \"nodes\":\n            return RoadwayNodeSelection(self, selection_dict)\n        else:\n            WranglerLogger.error(\"Selection data should be of type 'segment', 'links' or 'nodes'.\")\n            raise SelectionError(\"Selection data should be of type 'segment', 'links' or 'nodes'.\")\n\n    def modal_graph_hash(self, mode) -> str:\n        \"\"\"Hash of the links in order to detect a network change from when graph created.\"\"\"\n        _value = str.encode(self.links_df.df_hash() + \"-\" + mode)\n        _hash = hashlib.sha256(_value).hexdigest()\n\n        return _hash\n\n    def get_modal_graph(self, mode) -> MultiDiGraph:\n        \"\"\"Return a networkx graph of the network for a specific mode.\n\n        Args:\n            mode: mode of the network, one of `drive`,`transit`,`walk`, `bike`\n        \"\"\"\n        from .graph import net_to_graph\n\n        if self._modal_graphs[mode][\"hash\"] != self.modal_graph_hash(mode):\n            self._modal_graphs[mode][\"graph\"] = net_to_graph(self, mode)\n\n        return self._modal_graphs[mode][\"graph\"]\n\n    def apply(self, project_card: Union[ProjectCard, dict]) -> RoadwayNetwork:\n        \"\"\"Wrapper method to apply a roadway project, returning a new RoadwayNetwork instance.\n\n        Args:\n            project_card: either a dictionary of the project card object or ProjectCard instance\n        \"\"\"\n        if not (isinstance(project_card, ProjectCard) or isinstance(project_card, SubProject)):\n            project_card = ProjectCard(project_card)\n\n        project_card.validate()\n\n        if project_card.sub_projects:\n            for sp in project_card.sub_projects:\n                WranglerLogger.debug(f\"- applying subproject: {sp.change_type}\")\n                self._apply_change(sp)\n            return self\n        else:\n            return self._apply_change(project_card)\n\n    def 
_apply_change(self, change: Union[ProjectCard, SubProject]) -> RoadwayNetwork:\n        \"\"\"Apply a single change: a single-project project or a sub-project.\"\"\"\n        if not isinstance(change, SubProject):\n            WranglerLogger.info(f\"Applying Project to Roadway Network: {change.project}\")\n\n        if change.change_type == \"roadway_property_change\":\n            return apply_roadway_property_change(\n                self,\n                self.get_selection(change.facility),\n                change.roadway_property_change[\"property_changes\"],\n            )\n\n        elif change.change_type == \"roadway_addition\":\n            return apply_new_roadway(\n                self,\n                change.roadway_addition,\n            )\n\n        elif change.change_type == \"roadway_deletion\":\n            return apply_roadway_deletion(\n                self,\n                change.roadway_deletion,\n            )\n\n        elif change.change_type == \"pycode\":\n            return apply_calculated_roadway(self, change.pycode)\n        else:\n            WranglerLogger.error(f\"Couldn't find project in: \\n{change.__dict__}\")\n            raise (ValueError(f\"Invalid Project Card Category: {change.change_type}\"))\n\n    def links_with_link_ids(self, link_ids: List[int]) -> DataFrame[RoadLinksTable]:\n        \"\"\"Return subset of links_df based on link_ids list.\"\"\"\n        return filter_links_to_ids(self.links_df, link_ids)\n\n    def links_with_nodes(self, node_ids: List[int]) -> DataFrame[RoadLinksTable]:\n        \"\"\"Return subset of links_df based on node_ids list.\"\"\"\n        return filter_links_to_node_ids(self.links_df, node_ids)\n\n    def nodes_in_links(self) -> DataFrame[RoadNodesTable]:\n        \"\"\"Returns subset of self.nodes_df that are in self.links_df.\"\"\"\n        return filter_nodes_to_links(self.links_df, self.nodes_df)\n\n    def add_links(self, add_links_df: Union[pd.DataFrame, 
DataFrame[RoadLinksTable]]):\n        \"\"\"Validate combined links_df with LinksSchema before adding to self.links_df.\n\n        Args:\n            add_links_df: Dataframe of additional links to add.\n        \"\"\"\n        if not isinstance(add_links_df, RoadLinksTable):\n            add_links_df = data_to_links_df(add_links_df, nodes_df=self.nodes_df)\n        self.links_df = RoadLinksTable(pd.concat([self.links_df, add_links_df]))\n\n    def add_nodes(self, add_nodes_df: Union[pd.DataFrame, DataFrame[RoadNodesTable]]):\n        \"\"\"Validate combined nodes_df with NodesSchema before adding to self.nodes_df.\n\n        Args:\n            add_nodes_df: Dataframe of additional nodes to add.\n        \"\"\"\n        if not isinstance(add_nodes_df, RoadNodesTable):\n            add_nodes_df = data_to_nodes_df(add_nodes_df)\n        self.nodes_df = RoadNodesTable(pd.concat([self.nodes_df, add_nodes_df]))\n\n    def add_shapes(self, add_shapes_df: Union[pd.DataFrame, DataFrame[RoadShapesTable]]):\n        \"\"\"Validate combined shapes_df with RoadShapesTable before adding to self.shapes_df.\n\n        Args:\n            add_shapes_df: Dataframe of additional shapes to add.\n        \"\"\"\n        if not isinstance(add_shapes_df, RoadShapesTable):\n            add_shapes_df = df_to_shapes_df(add_shapes_df)\n        WranglerLogger.debug(f\"add_shapes_df: \\n{add_shapes_df}\")\n        WranglerLogger.debug(f\"self.shapes_df: \\n{self.shapes_df}\")\n        together_df = pd.concat([self.shapes_df, add_shapes_df])\n        WranglerLogger.debug(f\"together_df: \\n{together_df}\")\n        self.shapes_df = RoadShapesTable(pd.concat([self.shapes_df, add_shapes_df]))\n\n    def delete_links(\n        self,\n        selection_dict: SelectLinksDict,\n        clean_nodes: bool = False,\n        clean_shapes: bool = False,\n    ):\n        \"\"\"Deletes links based on selection dictionary and optionally associated nodes and shapes.\n\n        Args:\n            selection_dict 
(SelectLinks): Dictionary describing link selections as follows:\n                `all`: Optional[bool] = False. If true, will select all.\n                `name`: Optional[list[str]]\n                `ref`: Optional[list[str]]\n                `osm_link_id`:Optional[list[str]]\n                `model_link_id`: Optional[list[int]]\n                `modes`: Optional[list[str]]. Defaults to \"any\"\n                `ignore_missing`: if true, will not error when selected links are not found.\n                    Defaults to True.\n                ...plus any other link property to select on top of these.\n            clean_nodes (bool, optional): If True, will clean nodes uniquely associated with\n                deleted links. Defaults to False.\n            clean_shapes (bool, optional): If True, will clean shapes uniquely associated with\n                deleted links. Defaults to False.\n        \"\"\"\n        selection_dict = SelectLinksDict(**selection_dict).model_dump(\n            exclude_none=True, by_alias=True\n        )\n        selection = self.get_selection({\"links\": selection_dict})\n\n        if clean_nodes:\n            node_ids_to_delete = node_ids_unique_to_link_ids(\n                selection.selected_links, selection.selected_links_df, self.nodes_df\n            )\n            WranglerLogger.debug(\n                f\"Dropping nodes associated with dropped links: \\n{node_ids_to_delete}\"\n            )\n            self.nodes_df = delete_nodes_by_ids(self.nodes_df, del_node_ids=node_ids_to_delete)\n\n        if clean_shapes:\n            shape_ids_to_delete = shape_ids_unique_to_link_ids(\n                selection.selected_links, selection.selected_links_df, self.shapes_df\n            )\n            WranglerLogger.debug(\n                f\"Dropping shapes associated with dropped links: \\n{shape_ids_to_delete}\"\n            )\n            self.shapes_df = delete_shapes_by_ids(\n                self.shapes_df, del_shape_ids=shape_ids_to_delete\n            )\n\n        self.links_df = 
delete_links_by_ids(\n            self.links_df,\n            selection.selected_links,\n            ignore_missing=selection.ignore_missing,\n        )\n\n    def delete_nodes(\n        self,\n        selection_dict: Union[dict, SelectNodesDict],\n        remove_links: bool = False,\n    ) -> None:\n        \"\"\"Deletes nodes from roadway network. Wont delete nodes used by links in network.\n\n        Args:\n            selection_dict: dictionary of node selection criteria in the form of a SelectNodesDict.\n            remove_links: if True, will remove any links that are associated with the nodes.\n                If False, will only remove nodes if they are not associated with any links.\n                Defaults to False.\n\n        raises:\n            NodeDeletionError: If not ignore_missing and selected nodes to delete aren't in network\n        \"\"\"\n        if not isinstance(selection_dict, SelectNodesDict):\n            selection_dict = SelectNodesDict(**selection_dict)\n        selection_dict = selection_dict.model_dump(exclude_none=True, by_alias=True)\n        selection: RoadwayNodeSelection = self.get_selection(\n            {\"nodes\": selection_dict},\n        )\n        if remove_links:\n            del_node_ids = selection.selected_nodes\n            link_ids = self.links_with_nodes(selection.selected_nodes).model_link_id.to_list()\n            WranglerLogger.info(f\"Removing {len(link_ids)} links associated with nodes.\")\n            self.delete_links({\"model_link_id\": link_ids})\n        else:\n            unused_node_ids = node_ids_without_links(self.nodes_df, self.links_df)\n            del_node_ids = list(set(selection.selected_nodes).intersection(unused_node_ids))\n\n        self.nodes_df = delete_nodes_by_ids(\n            self.nodes_df, del_node_ids, ignore_missing=selection.ignore_missing\n        )\n\n    def clean_unused_shapes(self):\n        \"\"\"Removes any unused shapes from network that aren't referenced by links_df.\"\"\"\n 
       from .shapes.shapes import shape_ids_without_links\n\n        del_shape_ids = shape_ids_without_links(self.shapes_df, self.links_df)\n        self.shapes_df = self.shapes_df.drop(del_shape_ids)\n\n    def clean_unused_nodes(self):\n        \"\"\"Removes any unused nodes from network that aren't referenced by links_df.\n\n        NOTE: does not check if these nodes are used by transit, so use with caution.\n        \"\"\"\n        from .nodes.nodes import node_ids_without_links\n\n        node_ids = node_ids_without_links(self.nodes_df, self.links_df)\n        self.nodes_df = self.nodes_df.drop(node_ids)\n\n    def move_nodes(\n        self,\n        node_geometry_change_table: DataFrame[NodeGeometryChangeTable],\n    ):\n        \"\"\"Moves nodes based on updated geometry along with associated links and shape geometry.\n\n        Args:\n            node_geometry_change_table: a table with model_node_id, X, Y, and CRS.\n        \"\"\"\n        node_geometry_change_table = NodeGeometryChangeTable(node_geometry_change_table)\n        node_ids = node_geometry_change_table.model_node_id.to_list()\n        WranglerLogger.debug(f\"Moving nodes: {node_ids}\")\n        self.nodes_df = edit_node_geometry(self.nodes_df, node_geometry_change_table)\n        self.links_df = edit_link_geometry_from_nodes(self.links_df, self.nodes_df, node_ids)\n        self.shapes_df = edit_shape_geometry_from_nodes(\n            self.shapes_df, self.links_df, self.nodes_df, node_ids\n        )\n\n    def has_node(self, model_node_id: int) -> bool:\n        \"\"\"Queries if network has node based on model_node_id.\n\n        Args:\n            model_node_id: model_node_id to check for.\n        \"\"\"\n        has_node = self.nodes_df[self.nodes_df.model_node_id].isin([model_node_id]).any()\n\n        return has_node\n\n    def has_link(self, ab: tuple) -> bool:\n        \"\"\"Returns true if network has links with AB values.\n\n        Args:\n            ab: Tuple of values corresponding 
with A and B.\n        \"\"\"\n        sel_a, sel_b = ab\n        has_link = ((self.links_df[\"A\"] == sel_a) & (self.links_df[\"B\"] == sel_b)).any()\n        return has_link\n\n    def is_connected(self, mode: str) -> bool:\n        \"\"\"Determines if the network graph is \"strongly\" connected.\n\n        A graph is strongly connected if each vertex is reachable from every other vertex.\n\n        Args:\n            mode:  mode of the network, one of `drive`,`transit`,`walk`, `bike`\n        \"\"\"\n        is_connected = nx.is_strongly_connected(self.get_modal_graph(mode))\n\n        return is_connected\n\n    @staticmethod\n    def add_incident_link_data_to_nodes(\n        links_df: Optional[DataFrame[RoadLinksTable]] = None,\n        nodes_df: Optional[DataFrame[RoadNodesTable]] = None,\n        link_variables: list = [],\n    ) -> DataFrame[RoadNodesTable]:\n        \"\"\"Add data from links going to/from nodes to node.\n\n        Args:\n            links_df: if specified, will use this links table\n                rather than self.links_df\n            nodes_df: if specified, will use this nodes table\n                rather than self.nodes_df\n            link_variables: list of columns in links dataframe to add to incident nodes\n\n        Returns:\n            nodes DataFrame with link data where length is N*number of links going in/out\n        \"\"\"\n        WranglerLogger.debug(f\"Adding following link data to nodes: {link_variables}\")\n\n        _link_vals_to_nodes = [x for x in link_variables if x in links_df.columns]\n        if set(link_variables) - set(_link_vals_to_nodes):\n            WranglerLogger.warning(\n                \"Following columns not in links_df and won't be added to nodes: {} \".format(\n                    list(set(link_variables) - set(_link_vals_to_nodes))\n                )\n            )\n\n        _nodes_from_links_A = nodes_df.merge(\n            links_df[[links_df.params.from_node] + 
_link_vals_to_nodes],\n            how=\"outer\",\n            left_on=nodes_df.params.primary_key,\n            right_on=links_df.params.from_node,\n        )\n        _nodes_from_links_B = nodes_df.merge(\n            links_df[[links_df.params.to_node] + _link_vals_to_nodes],\n            how=\"outer\",\n            left_on=nodes_df.params.primary_key,\n            right_on=links_df.params.to_node,\n        )\n        _nodes_from_links_ab = pd.concat([_nodes_from_links_A, _nodes_from_links_B])\n\n        return _nodes_from_links_ab\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.link_shapes_df","title":"link_shapes_df: gpd.GeoDataFrame property","text":"

Add shape geometry to links if available.

returns: shapes merged to links dataframe

"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.model_net","title":"model_net: ModelRoadwayNetwork property","text":"

Return a ModelRoadwayNetwork object for this network.

"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.network_hash","title":"network_hash: str property","text":"

Hash of the links and nodes dataframes.

"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.shapes_df","title":"shapes_df: DataFrame[RoadShapesTable] property writable","text":"

Load and return RoadShapesTable.

If not already loaded, will read from shapes_file and return. If shapes_file is None, will return an empty dataframe with the right schema. If shapes_df is already set, will return that.

"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.summary","title":"summary: dict property","text":"

Quick summary dictionary of number of links, nodes.

"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.add_incident_link_data_to_nodes","title":"add_incident_link_data_to_nodes(links_df=None, nodes_df=None, link_variables=[]) staticmethod","text":"

Add data from links going to/from nodes to node.

Parameters:

Name Type Description Default links_df Optional[DataFrame[RoadLinksTable]]

if specified, will use this links table rather than self.links_df

None nodes_df Optional[DataFrame[RoadNodesTable]]

if specified, will use this nodes table rather than self.nodes_df

None link_variables list

list of columns in links dataframe to add to incident nodes

[]

Returns:

Type Description DataFrame[RoadNodesTable]

nodes DataFrame with link data where length is N*number of links going in/out

Source code in network_wrangler/roadway/network.py
@staticmethod\ndef add_incident_link_data_to_nodes(\n    links_df: Optional[DataFrame[RoadLinksTable]] = None,\n    nodes_df: Optional[DataFrame[RoadNodesTable]] = None,\n    link_variables: list = [],\n) -> DataFrame[RoadNodesTable]:\n    \"\"\"Add data from links going to/from nodes to node.\n\n    Args:\n        links_df: if specified, will use this links table\n            rather than self.links_df\n        nodes_df: if specified, will use this nodes table\n            rather than self.nodes_df\n        link_variables: list of columns in links dataframe to add to incident nodes\n\n    Returns:\n        nodes DataFrame with link data where length is N*number of links going in/out\n    \"\"\"\n    WranglerLogger.debug(f\"Adding following link data to nodes: {link_variables}\")\n\n    _link_vals_to_nodes = [x for x in link_variables if x in links_df.columns]\n    if set(link_variables) - set(_link_vals_to_nodes):\n        WranglerLogger.warning(\n            \"Following columns not in links_df and won't be added to nodes: {} \".format(\n                list(set(link_variables) - set(_link_vals_to_nodes))\n            )\n        )\n\n    _nodes_from_links_A = nodes_df.merge(\n        links_df[[links_df.params.from_node] + _link_vals_to_nodes],\n        how=\"outer\",\n        left_on=nodes_df.params.primary_key,\n        right_on=links_df.params.from_node,\n    )\n    _nodes_from_links_B = nodes_df.merge(\n        links_df[[links_df.params.to_node] + _link_vals_to_nodes],\n        how=\"outer\",\n        left_on=nodes_df.params.primary_key,\n        right_on=links_df.params.to_node,\n    )\n    _nodes_from_links_ab = pd.concat([_nodes_from_links_A, _nodes_from_links_B])\n\n    return _nodes_from_links_ab\n
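The A/B double-merge above attaches each link's attributes once for its from-node and once for its to-node, so a node with k incident links appears k times in the result. A minimal, dependency-free sketch of that pattern (the node and link records here are illustrative, not from any real network):

```python
def incident_link_data(nodes, links, link_variables):
    """Return one row per (node, incident link) with the requested link columns."""
    rows = []
    for link in links:
        attrs = {var: link[var] for var in link_variables if var in link}
        # Each link contributes a row for both its A (from) and B (to) node.
        for end in ("A", "B"):
            rows.append({**nodes[link[end]], **attrs})
    return rows

nodes = {1: {"model_node_id": 1}, 2: {"model_node_id": 2}, 3: {"model_node_id": 3}}
links = [
    {"A": 1, "B": 2, "lanes": 2},
    {"A": 2, "B": 3, "lanes": 4},
]
rows = incident_link_data(nodes, links, ["lanes"])
# Node 2 is incident to both links, so it appears twice.
```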
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.add_links","title":"add_links(add_links_df)","text":"

Validate combined links_df with LinksSchema before adding to self.links_df.

Parameters:

Name Type Description Default add_links_df Union[DataFrame, DataFrame[RoadLinksTable]]

Dataframe of additional links to add.

required Source code in network_wrangler/roadway/network.py
def add_links(self, add_links_df: Union[pd.DataFrame, DataFrame[RoadLinksTable]]):\n    \"\"\"Validate combined links_df with LinksSchema before adding to self.links_df.\n\n    Args:\n        add_links_df: Dataframe of additional links to add.\n    \"\"\"\n    if not isinstance(add_links_df, RoadLinksTable):\n        add_links_df = data_to_links_df(add_links_df, nodes_df=self.nodes_df)\n    self.links_df = RoadLinksTable(pd.concat([self.links_df, add_links_df]))\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.add_nodes","title":"add_nodes(add_nodes_df)","text":"

Validate combined nodes_df with NodesSchema before adding to self.nodes_df.

Parameters:

Name Type Description Default add_nodes_df Union[DataFrame, DataFrame[RoadNodesTable]]

Dataframe of additional nodes to add.

required Source code in network_wrangler/roadway/network.py
def add_nodes(self, add_nodes_df: Union[pd.DataFrame, DataFrame[RoadNodesTable]]):\n    \"\"\"Validate combined nodes_df with NodesSchema before adding to self.nodes_df.\n\n    Args:\n        add_nodes_df: Dataframe of additional nodes to add.\n    \"\"\"\n    if not isinstance(add_nodes_df, RoadNodesTable):\n        add_nodes_df = data_to_nodes_df(add_nodes_df)\n    self.nodes_df = RoadNodesTable(pd.concat([self.nodes_df, add_nodes_df]))\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.add_shapes","title":"add_shapes(add_shapes_df)","text":"

Validate combined shapes_df with RoadShapesTable before adding to self.shapes_df.

Parameters:

Name Type Description Default add_shapes_df Union[DataFrame, DataFrame[RoadShapesTable]]

Dataframe of additional shapes to add.

required Source code in network_wrangler/roadway/network.py
def add_shapes(self, add_shapes_df: Union[pd.DataFrame, DataFrame[RoadShapesTable]]):\n    \"\"\"Validate combined shapes_df with RoadShapesTable before adding to self.shapes_df.\n\n    Args:\n        add_shapes_df: Dataframe of additional shapes to add.\n    \"\"\"\n    if not isinstance(add_shapes_df, RoadShapesTable):\n        add_shapes_df = df_to_shapes_df(add_shapes_df)\n    WranglerLogger.debug(f\"add_shapes_df: \\n{add_shapes_df}\")\n    WranglerLogger.debug(f\"self.shapes_df: \\n{self.shapes_df}\")\n    together_df = pd.concat([self.shapes_df, add_shapes_df])\n    WranglerLogger.debug(f\"together_df: \\n{together_df}\")\n    self.shapes_df = RoadShapesTable(pd.concat([self.shapes_df, add_shapes_df]))\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.apply","title":"apply(project_card)","text":"

Wrapper method to apply a roadway project, returning a new RoadwayNetwork instance.

Parameters:

Name Type Description Default project_card Union[ProjectCard, dict]

either a dictionary of the project card object or ProjectCard instance

required Source code in network_wrangler/roadway/network.py
def apply(self, project_card: Union[ProjectCard, dict]) -> RoadwayNetwork:\n    \"\"\"Wrapper method to apply a roadway project, returning a new RoadwayNetwork instance.\n\n    Args:\n        project_card: either a dictionary of the project card object or ProjectCard instance\n    \"\"\"\n    if not (isinstance(project_card, ProjectCard) or isinstance(project_card, SubProject)):\n        project_card = ProjectCard(project_card)\n\n    project_card.validate()\n\n    if project_card.sub_projects:\n        for sp in project_card.sub_projects:\n            WranglerLogger.debug(f\"- applying subproject: {sp.change_type}\")\n            self._apply_change(sp)\n        return self\n    else:\n        return self._apply_change(project_card)\n
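The control flow above (expand sub-projects if present, otherwise dispatch the single change on its `change_type`) can be sketched with a plain handler table. Names below (`HANDLERS`, `apply_card`) are illustrative stand-ins, not network_wrangler's API:

```python
# Hypothetical dispatch table keyed on change_type; real handlers would
# mutate and return the RoadwayNetwork.
HANDLERS = {
    "roadway_property_change": lambda net, c: "changed properties",
    "roadway_addition": lambda net, c: "added roadway",
    "roadway_deletion": lambda net, c: "deleted roadway",
}

def apply_card(net, card: dict):
    # A card with sub_projects applies each in turn; otherwise it is its own change.
    changes = card.get("sub_projects") or [card]
    results = []
    for change in changes:
        handler = HANDLERS.get(change["change_type"])
        if handler is None:
            raise ValueError(f"Invalid Project Card Category: {change['change_type']}")
        results.append(handler(net, change))
    return results

out = apply_card(None, {"change_type": "roadway_addition"})
```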
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.clean_unused_nodes","title":"clean_unused_nodes()","text":"

Removes any unused nodes from network that aren\u2019t referenced by links_df.

NOTE: does not check if these nodes are used by transit, so use with caution.

Source code in network_wrangler/roadway/network.py
def clean_unused_nodes(self):\n    \"\"\"Removes any unused nodes from network that aren't referenced by links_df.\n\n    NOTE: does not check if these nodes are used by transit, so use with caution.\n    \"\"\"\n    from .nodes.nodes import node_ids_without_links\n\n    node_ids = node_ids_without_links(self.nodes_df, self.links_df)\n    self.nodes_df = self.nodes_df.drop(node_ids)\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.clean_unused_shapes","title":"clean_unused_shapes()","text":"

Removes any unused shapes from network that aren\u2019t referenced by links_df.

Source code in network_wrangler/roadway/network.py
def clean_unused_shapes(self):\n    \"\"\"Removes any unused shapes from network that aren't referenced by links_df.\"\"\"\n    from .shapes.shapes import shape_ids_without_links\n\n    del_shape_ids = shape_ids_without_links(self.shapes_df, self.links_df)\n    self.shapes_df = self.shapes_df.drop(del_shape_ids)\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.coerce_crs","title":"coerce_crs(v, info)","text":"

Coerce crs of nodes_df and links_df to network crs.

Source code in network_wrangler/roadway/network.py
@field_validator(\"nodes_df\", \"links_df\")\ndef coerce_crs(cls, v, info):\n    \"\"\"Coerce crs of nodes_df and links_df to network crs.\"\"\"\n    net_crs = info.data[\"crs\"]\n    if v.crs != net_crs:\n        WranglerLogger.warning(\n            f\"CRS of links_df ({v.crs}) doesn't match network crs {net_crs}. \\\n                Changing to network crs.\"\n        )\n        v = v.to_crs(net_crs)\n    return v\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.delete_links","title":"delete_links(selection_dict, clean_nodes=False, clean_shapes=False)","text":"

Deletes links based on selection dictionary and optionally associated nodes and shapes.

Parameters:

Name Type Description Default selection_dict SelectLinks

Dictionary describing link selections as follows: all: Optional[bool] = False. If true, will select all. name: Optional[list[str]] ref: Optional[list[str]] osm_link_id: Optional[list[str]] model_link_id: Optional[list[int]] modes: Optional[list[str]]. Defaults to \u201cany\u201d ignore_missing: if true, will not error when selected links are not found. Defaults to True. \u2026plus any other link property to select on top of these.

required clean_nodes bool

If True, will clean nodes uniquely associated with deleted links. Defaults to False.

False clean_shapes bool

If True, will clean shapes uniquely associated with deleted links. Defaults to False.

False Source code in network_wrangler/roadway/network.py
def delete_links(\n    self,\n    selection_dict: SelectLinksDict,\n    clean_nodes: bool = False,\n    clean_shapes: bool = False,\n):\n    \"\"\"Deletes links based on selection dictionary and optionally associated nodes and shapes.\n\n    Args:\n        selection_dict (SelectLinks): Dictionary describing link selections as follows:\n            `all`: Optional[bool] = False. If true, will select all.\n            `name`: Optional[list[str]]\n            `ref`: Optional[list[str]]\n            `osm_link_id`:Optional[list[str]]\n            `model_link_id`: Optional[list[int]]\n            `modes`: Optional[list[str]]. Defaults to \"any\"\n            `ignore_missing`: if true, will not error when selected links are not found.\n                Defaults to True.\n            ...plus any other link property to select on top of these.\n        clean_nodes (bool, optional): If True, will clean nodes uniquely associated with\n            deleted links. Defaults to False.\n        clean_shapes (bool, optional): If True, will clean shapes uniquely associated with\n            deleted links. 
Defaults to False.\n    \"\"\"\n    selection_dict = SelectLinksDict(**selection_dict).model_dump(\n        exclude_none=True, by_alias=True\n    )\n    selection = self.get_selection({\"links\": selection_dict})\n\n    if clean_nodes:\n        node_ids_to_delete = node_ids_unique_to_link_ids(\n            selection.selected_links, selection.selected_links_df, self.nodes_df\n        )\n        WranglerLogger.debug(\n            f\"Dropping nodes associated with dropped links: \\n{node_ids_to_delete}\"\n        )\n        self.nodes_df = delete_nodes_by_ids(self.nodes_df, del_node_ids=node_ids_to_delete)\n\n    if clean_shapes:\n        shape_ids_to_delete = shape_ids_unique_to_link_ids(\n            selection.selected_links, selection.selected_links_df, self.shapes_df\n        )\n        WranglerLogger.debug(\n            f\"Dropping shapes associated with dropped links: \\n{shape_ids_to_delete}\"\n        )\n        self.shapes_df = delete_shapes_by_ids(\n            self.shapes_df, del_shape_ids=shape_ids_to_delete\n        )\n\n    self.links_df = delete_links_by_ids(\n        self.links_df,\n        selection.selected_links,\n        ignore_missing=selection.ignore_missing,\n    )\n
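The `clean_nodes` bookkeeping boils down to a set computation: a node may be dropped only if every link that touches it is also being deleted. A sketch with plain sets standing in for the links and nodes tables (the tuples below are hypothetical (A, B) pairs):

```python
def node_ids_unique_to_deleted_links(deleted_links, all_links):
    """Node ids touched only by deleted links, i.e. safe to drop."""
    touched_by_deleted = {n for link in deleted_links for n in link}
    kept_links = [link for link in all_links if link not in deleted_links]
    still_used = {n for link in kept_links for n in link}
    return touched_by_deleted - still_used

all_links = {(1, 2), (2, 3), (3, 4)}
deleted = {(1, 2)}
orphans = node_ids_unique_to_deleted_links(deleted, all_links)
# Node 1 only appeared on the deleted link; node 2 is still used by (2, 3).
```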
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.delete_nodes","title":"delete_nodes(selection_dict, remove_links=False)","text":"

Deletes nodes from roadway network. Won\u2019t delete nodes used by links in network.

Parameters:

Name Type Description Default selection_dict Union[dict, SelectNodesDict]

dictionary of node selection criteria in the form of a SelectNodesDict.

required remove_links bool

if True, will remove any links that are associated with the nodes. If False, will only remove nodes if they are not associated with any links. Defaults to False.

False

Raises:

Type Description NodeDeletionError

If not ignore_missing and selected nodes to delete aren\u2019t in network

Source code in network_wrangler/roadway/network.py
def delete_nodes(\n    self,\n    selection_dict: Union[dict, SelectNodesDict],\n    remove_links: bool = False,\n) -> None:\n    \"\"\"Deletes nodes from roadway network. Won't delete nodes used by links in network.\n\n    Args:\n        selection_dict: dictionary of node selection criteria in the form of a SelectNodesDict.\n        remove_links: if True, will remove any links that are associated with the nodes.\n            If False, will only remove nodes if they are not associated with any links.\n            Defaults to False.\n\n    Raises:\n        NodeDeletionError: If not ignore_missing and selected nodes to delete aren't in network\n    \"\"\"\n    if not isinstance(selection_dict, SelectNodesDict):\n        selection_dict = SelectNodesDict(**selection_dict)\n    selection_dict = selection_dict.model_dump(exclude_none=True, by_alias=True)\n    selection: RoadwayNodeSelection = self.get_selection(\n        {\"nodes\": selection_dict},\n    )\n    if remove_links:\n        del_node_ids = selection.selected_nodes\n        link_ids = self.links_with_nodes(selection.selected_nodes).model_link_id.to_list()\n        WranglerLogger.info(f\"Removing {len(link_ids)} links associated with nodes.\")\n        self.delete_links({\"model_link_id\": link_ids})\n    else:\n        unused_node_ids = node_ids_without_links(self.nodes_df, self.links_df)\n        del_node_ids = list(set(selection.selected_nodes).intersection(unused_node_ids))\n\n    self.nodes_df = delete_nodes_by_ids(\n        self.nodes_df, del_node_ids, ignore_missing=selection.ignore_missing\n    )\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.get_modal_graph","title":"get_modal_graph(mode)","text":"

Return a networkx graph of the network for a specific mode.

Parameters:

Name Type Description Default mode

mode of the network, one of drive, transit, walk, bike

required Source code in network_wrangler/roadway/network.py
def get_modal_graph(self, mode) -> MultiDiGraph:\n    \"\"\"Return a networkx graph of the network for a specific mode.\n\n    Args:\n        mode: mode of the network, one of `drive`,`transit`,`walk`, `bike`\n    \"\"\"\n    from .graph import net_to_graph\n\n    if self._modal_graphs[mode][\"hash\"] != self.modal_graph_hash(mode):\n        self._modal_graphs[mode][\"graph\"] = net_to_graph(self, mode)\n\n    return self._modal_graphs[mode][\"graph\"]\n
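The caching scheme above pairs with `modal_graph_hash`: a SHA-256 over the links data plus the mode acts as a cache key, so the expensive graph build runs only when the network (or requested mode) changed. A minimal sketch, with a string standing in for the links hash and the built graph:

```python
import hashlib

_cache = {"hash": None, "graph": None}

def modal_graph_hash(links_repr: str, mode: str) -> str:
    # Same idea as the method: hash of links data joined with the mode.
    return hashlib.sha256(f"{links_repr}-{mode}".encode()).hexdigest()

def get_modal_graph(links_repr: str, mode: str):
    h = modal_graph_hash(links_repr, mode)
    if _cache["hash"] != h:
        _cache["graph"] = f"graph({links_repr},{mode})"  # expensive build happens here
        _cache["hash"] = h
    return _cache["graph"]

g1 = get_modal_graph("links-v1", "drive")
g2 = get_modal_graph("links-v1", "drive")   # unchanged network: cache hit
g3 = get_modal_graph("links-v2", "drive")   # network changed: rebuilt
```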
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.get_property_by_timespan_and_group","title":"get_property_by_timespan_and_group(link_property, category=DEFAULT_CATEGORY, timespan=DEFAULT_TIMESPAN, strict_timespan_match=False, min_overlap_minutes=60)","text":"

Returns a new dataframe with model_link_id and link property by category and timespan.

Convenience method for backward compatibility.

Parameters:

Name Type Description Default link_property str

link property to query

required category Union[str, int]

category to query or a list of categories. Defaults to DEFAULT_CATEGORY.

DEFAULT_CATEGORY timespan TimespanString

timespan to query in the form of [\u201cHH:MM\u201d,\u201dHH:MM\u201d]. Defaults to DEFAULT_TIMESPAN.

DEFAULT_TIMESPAN strict_timespan_match bool

If True, will only return links that match the timespan exactly. Defaults to False.

False min_overlap_minutes int

If strict_timespan_match is False, will return links that overlap with the timespan by at least this many minutes. Defaults to 60.

60 Source code in network_wrangler/roadway/network.py
def get_property_by_timespan_and_group(\n    self,\n    link_property: str,\n    category: Union[str, int] = DEFAULT_CATEGORY,\n    timespan: TimespanString = DEFAULT_TIMESPAN,\n    strict_timespan_match: bool = False,\n    min_overlap_minutes: int = 60,\n) -> Any:\n    \"\"\"Returns a new dataframe with model_link_id and link property by category and timespan.\n\n    Convenience method for backward compatibility.\n\n    Args:\n        link_property: link property to query\n        category: category to query or a list of categories. Defaults to DEFAULT_CATEGORY.\n        timespan: timespan to query in the form of [\"HH:MM\",\"HH:MM\"].\n            Defaults to DEFAULT_TIMESPAN.\n        strict_timespan_match: If True, will only return links that match the timespan exactly.\n            Defaults to False.\n        min_overlap_minutes: If strict_timespan_match is False, will return links that overlap\n            with the timespan by at least this many minutes. Defaults to 60.\n    \"\"\"\n    from .links.scopes import prop_for_scope\n\n    return prop_for_scope(\n        self.links_df,\n        link_property,\n        timespan=timespan,\n        category=category,\n        strict_timespan_match=strict_timespan_match,\n        min_overlap_minutes=min_overlap_minutes,\n    )\n
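The `min_overlap_minutes` test reduces to computing how many minutes two `["HH:MM","HH:MM"]` timespans share. A sketch of that arithmetic (overnight spans that wrap past midnight are out of scope here):

```python
def _minutes(hhmm: str) -> int:
    """Minutes since midnight for an HH:MM string."""
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

def overlap_minutes(a, b) -> int:
    """Minutes shared by two same-day ["HH:MM", "HH:MM"] timespans."""
    start = max(_minutes(a[0]), _minutes(b[0]))
    end = min(_minutes(a[1]), _minutes(b[1]))
    return max(0, end - start)

ov = overlap_minutes(["06:00", "09:00"], ["08:00", "10:00"])  # shares 08:00-09:00
qualifies = ov >= 60  # would pass the default min_overlap_minutes
```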
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.get_selection","title":"get_selection(selection_dict, overwrite=False)","text":"

Return selection if it already exists, otherwise performs selection.

Parameters:

Name Type Description Default selection_dict dict

SelectFacility dictionary.

required overwrite bool

if True, will overwrite any previously cached searches. Defaults to False.

False Source code in network_wrangler/roadway/network.py
def get_selection(\n    self,\n    selection_dict: Union[dict, SelectFacility],\n    overwrite: bool = False,\n) -> Union[RoadwayNodeSelection, RoadwayLinkSelection]:\n    \"\"\"Return selection if it already exists, otherwise performs selection.\n\n    Args:\n        selection_dict (dict): SelectFacility dictionary.\n        overwrite: if True, will overwrite any previously cached searches. Defaults to False.\n    \"\"\"\n    key = _create_selection_key(selection_dict)\n    if (key in self._selections) and not overwrite:\n        WranglerLogger.debug(f\"Using cached selection from key: {key}\")\n        return self._selections[key]\n\n    if isinstance(selection_dict, SelectFacility):\n        selection_data = selection_dict\n    elif isinstance(selection_dict, SelectLinksDict):\n        selection_data = SelectFacility(links=selection_dict)\n    elif isinstance(selection_dict, SelectNodesDict):\n        selection_data = SelectFacility(nodes=selection_dict)\n    elif isinstance(selection_dict, dict):\n        selection_data = SelectFacility(**selection_dict)\n    else:\n        WranglerLogger.error(f\"`selection_dict` arg must be a dictionary or SelectFacility model.\\\n                         Received: {selection_dict} of type {type(selection_dict)}\")\n        raise SelectionError(\"selection_dict arg must be a dictionary or SelectFacility model\")\n\n    WranglerLogger.debug(f\"Getting selection from key: {key}\")\n    if selection_data.feature_types in [\"links\", \"segment\"]:\n        return RoadwayLinkSelection(self, selection_dict)\n    elif selection_data.feature_types == \"nodes\":\n        return RoadwayNodeSelection(self, selection_dict)\n    else:\n        WranglerLogger.error(\"Selection data should be of type 'segment', 'links' or 'nodes'.\")\n        raise SelectionError(\"Selection data should be of type 'segment', 'links' or 'nodes'.\")\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.has_link","title":"has_link(ab)","text":"

Returns true if network has links with AB values.

Parameters:

Name Type Description Default ab tuple

Tuple of values corresponding with A and B.

required Source code in network_wrangler/roadway/network.py
def has_link(self, ab: tuple) -> bool:\n    \"\"\"Returns true if network has links with AB values.\n\n    Args:\n        ab: Tuple of values corresponding with A and B.\n    \"\"\"\n    sel_a, sel_b = ab\n    # Check row-wise for a link whose A and B both match the requested pair.\n    has_link = ((self.links_df[\"A\"] == sel_a) & (self.links_df[\"B\"] == sel_b)).any()\n    return has_link\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.has_node","title":"has_node(model_node_id)","text":"

Queries if network has node based on model_node_id.

Parameters:

Name Type Description Default model_node_id int

model_node_id to check for.

required Source code in network_wrangler/roadway/network.py
def has_node(self, model_node_id: int) -> bool:\n    \"\"\"Queries if network has node based on model_node_id.\n\n    Args:\n        model_node_id: model_node_id to check for.\n    \"\"\"\n    has_node = self.nodes_df.model_node_id.isin([model_node_id]).any()\n\n    return has_node\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.is_connected","title":"is_connected(mode)","text":"

Determines if the network graph is \u201cstrongly\u201d connected.

A graph is strongly connected if each vertex is reachable from every other vertex.

Parameters:

Name Type Description Default mode str

mode of the network, one of drive, transit, walk, bike

required Source code in network_wrangler/roadway/network.py
def is_connected(self, mode: str) -> bool:\n    \"\"\"Determines if the network graph is \"strongly\" connected.\n\n    A graph is strongly connected if each vertex is reachable from every other vertex.\n\n    Args:\n        mode:  mode of the network, one of `drive`,`transit`,`walk`, `bike`\n    \"\"\"\n    is_connected = nx.is_strongly_connected(self.get_modal_graph(mode))\n\n    return is_connected\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.links_with_link_ids","title":"links_with_link_ids(link_ids)","text":"

Return subset of links_df based on link_ids list.

Source code in network_wrangler/roadway/network.py
def links_with_link_ids(self, link_ids: List[int]) -> DataFrame[RoadLinksTable]:\n    \"\"\"Return subset of links_df based on link_ids list.\"\"\"\n    return filter_links_to_ids(self.links_df, link_ids)\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.links_with_nodes","title":"links_with_nodes(node_ids)","text":"

Return subset of links_df based on node_ids list.

Source code in network_wrangler/roadway/network.py
def links_with_nodes(self, node_ids: List[int]) -> DataFrame[RoadLinksTable]:\n    \"\"\"Return subset of links_df based on node_ids list.\"\"\"\n    return filter_links_to_node_ids(self.links_df, node_ids)\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.modal_graph_hash","title":"modal_graph_hash(mode)","text":"

Hash of the links in order to detect a network change from when graph created.

Source code in network_wrangler/roadway/network.py
def modal_graph_hash(self, mode) -> str:\n    \"\"\"Hash of the links in order to detect a network change from when graph created.\"\"\"\n    _value = str.encode(self.links_df.df_hash() + \"-\" + mode)\n    _hash = hashlib.sha256(_value).hexdigest()\n\n    return _hash\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.move_nodes","title":"move_nodes(node_geometry_change_table)","text":"

Moves nodes based on updated geometry along with associated links and shape geometry.

Parameters:

Name Type Description Default node_geometry_change_table DataFrame[NodeGeometryChangeTable]

a table with model_node_id, X, Y, and CRS.

required Source code in network_wrangler/roadway/network.py
def move_nodes(\n    self,\n    node_geometry_change_table: DataFrame[NodeGeometryChangeTable],\n):\n    \"\"\"Moves nodes based on updated geometry along with associated links and shape geometry.\n\n    Args:\n        node_geometry_change_table: a table with model_node_id, X, Y, and CRS.\n    \"\"\"\n    node_geometry_change_table = NodeGeometryChangeTable(node_geometry_change_table)\n    node_ids = node_geometry_change_table.model_node_id.to_list()\n    WranglerLogger.debug(f\"Moving nodes: {node_ids}\")\n    self.nodes_df = edit_node_geometry(self.nodes_df, node_geometry_change_table)\n    self.links_df = edit_link_geometry_from_nodes(self.links_df, self.nodes_df, node_ids)\n    self.shapes_df = edit_shape_geometry_from_nodes(\n        self.shapes_df, self.links_df, self.nodes_df, node_ids\n    )\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.nodes_in_links","title":"nodes_in_links()","text":"

Returns subset of self.nodes_df that are in self.links_df.

Source code in network_wrangler/roadway/network.py
def nodes_in_links(self) -> DataFrame[RoadNodesTable]:\n    \"\"\"Returns subset of self.nodes_df that are in self.links_df.\"\"\"\n    return filter_nodes_to_links(self.links_df, self.nodes_df)\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork","title":"TransitNetwork","text":"

Bases: object

Representation of a Transit Network.

Typical usage example:

import network_wrangler as wr\ntc=wr.load_transit(stpaul_gtfs)\n

Attributes:

Name Type Description feed

gtfs feed object with interlinked tables.

road_net RoadwayNetwork

Associated roadway network object.

graph MultiDiGraph

Graph for associated roadway network object.

feed_path str

Where the feed was read in from.

validated_frequencies bool

The frequencies have been validated.

validated_road_network_consistency

The network has been validated against the road network.

Source code in network_wrangler/transit/network.py
class TransitNetwork(object):\n    \"\"\"Representation of a Transit Network.\n\n    Typical usage example:\n    ``` py\n    import network_wrangler as wr\n    tc=wr.load_transit(stpaul_gtfs)\n    ```\n\n    Attributes:\n        feed: gtfs feed object with interlinked tables.\n        road_net (RoadwayNetwork): Associated roadway network object.\n        graph (nx.MultiDiGraph): Graph for associated roadway network object.\n        feed_path (str): Where the feed was read in from.\n        validated_frequencies (bool): The frequencies have been validated.\n        validated_road_network_consistency (): The network has been validated against\n            the road network.\n    \"\"\"\n\n    TIME_COLS = [\"arrival_time\", \"departure_time\", \"start_time\", \"end_time\"]\n\n    def __init__(self, feed: Feed):\n        \"\"\"Constructor for TransitNetwork.\n\n        Args:\n            feed: Feed object representing the transit network gtfs tables\n        \"\"\"\n        WranglerLogger.debug(\"Creating new TransitNetwork.\")\n\n        self._road_net: Optional[RoadwayNetwork] = None\n        self.feed: Feed = feed\n        self.graph: nx.MultiDiGraph = None\n\n        # initialize\n        self._consistent_with_road_net = False\n\n        # cached selections\n        self._selections: dict[str, dict] = {}\n\n    @property\n    def feed_path(self):\n        \"\"\"Pass through property from Feed.\"\"\"\n        return self.feed.feed_path\n\n    @property\n    def config(self):\n        \"\"\"Pass through property from Feed.\"\"\"\n        return self.feed.config\n\n    @property\n    def feed(self):\n        \"\"\"Feed associated with the transit network.\"\"\"\n        return self._feed\n\n    @feed.setter\n    def feed(self, feed: Feed):\n        if not isinstance(feed, Feed):\n            msg = f\"TransitNetwork's feed value must be a valid Feed instance. 
\\\n                             This is a {type(feed)}.\"\n            WranglerLogger.error(msg)\n            raise ValueError(msg)\n        if self._road_net is None or transit_road_net_consistency(feed, self._road_net):\n            self._feed = feed\n            self._stored_feed_hash = copy.deepcopy(feed.hash)\n        else:\n            WranglerLogger.error(\"Can't assign Feed inconsistent with set Roadway Network.\")\n            raise TransitRoadwayConsistencyError(\n                \"Can't assign Feed inconsistent with set RoadwayNetwork.\"\n            )\n\n    @property\n    def road_net(self) -> RoadwayNetwork:\n        \"\"\"Roadway network associated with the transit network.\"\"\"\n        return self._road_net\n\n    @road_net.setter\n    def road_net(self, road_net: RoadwayNetwork):\n        if not isinstance(road_net, RoadwayNetwork):\n            msg = f\"TransitNetwork's road_net: value must be a valid RoadwayNetwork instance. \\\n                             This is a {type(road_net)}.\"\n            WranglerLogger.error(msg)\n            raise ValueError(msg)\n        if transit_road_net_consistency(self.feed, road_net):\n            self._road_net = road_net\n            self._stored_road_net_hash = copy.deepcopy(self.road_net.network_hash)\n            self._consistent_with_road_net = True\n        else:\n            WranglerLogger.error(\n                \"Can't assign inconsistent RoadwayNetwork - Roadway Network not \\\n                                 set, but can be referenced separately.\"\n            )\n            raise TransitRoadwayConsistencyError(\"Can't assign inconsistent RoadwayNetwork.\")\n\n    @property\n    def feed_hash(self):\n        \"\"\"Return the hash of the feed.\"\"\"\n        return self.feed.hash\n\n    @property\n    def consistent_with_road_net(self) -> bool:\n        \"\"\"Indicate if road_net is consistent with transit network.\n\n        Checks the network hash of when consistency was last evaluated. 
If transit network or\n        roadway network has changed, will re-evaluate consistency and return the updated value and\n        update self._stored_road_net_hash.\n\n        Returns:\n            Boolean indicating if road_net is consistent with transit network.\n        \"\"\"\n        updated_road = self.road_net.network_hash != self._stored_road_net_hash\n        updated_feed = self.feed_hash != self._stored_feed_hash\n\n        if updated_road or updated_feed:\n            self._consistent_with_road_net = transit_road_net_consistency(self.feed, self.road_net)\n            self._stored_road_net_hash = copy.deepcopy(self.road_net.network_hash)\n            self._stored_feed_hash = copy.deepcopy(self.feed_hash)\n        return self._consistent_with_road_net\n\n    def __deepcopy__(self, memo):\n        \"\"\"Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.\"\"\"\n        COPY_REF_NOT_VALUE = [\"_road_net\"]\n        # Create a new, empty instance\n        copied_net = self.__class__.__new__(self.__class__)\n        # Return the new TransitNetwork instance\n        attribute_dict = vars(self)\n\n        # Copy the attributes to the new instance\n        for attr_name, attr_value in attribute_dict.items():\n            # WranglerLogger.debug(f\"Copying {attr_name}\")\n            if attr_name in COPY_REF_NOT_VALUE:\n                # If the attribute is in the COPY_REF_NOT_VALUE list, assign the reference\n                setattr(copied_net, attr_name, attr_value)\n            else:\n                # WranglerLogger.debug(f\"making deep copy: {attr_name}\")\n                # For other attributes, perform a deep copy\n                setattr(copied_net, attr_name, copy.deepcopy(attr_value, memo))\n\n        return copied_net\n\n    def deepcopy(self):\n        \"\"\"Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.\"\"\"\n        return copy.deepcopy(self)\n\n    @property\n    def stops_gdf(self) 
-> gpd.GeoDataFrame:\n        \"\"\"Return stops as a GeoDataFrame using set roadway geometry.\"\"\"\n        if self.road_net is not None:\n            ref_nodes = self.road_net.nodes_df\n        else:\n            ref_nodes = None\n        return to_points_gdf(self.feed.stops, nodes_df=ref_nodes)\n\n    @property\n    def shapes_gdf(self) -> gpd.GeoDataFrame:\n        \"\"\"Return aggregated shapes as a GeoDataFrame using set roadway geometry.\"\"\"\n        if self.road_net is not None:\n            ref_nodes = self.road_net.nodes_df\n        else:\n            ref_nodes = None\n        return shapes_to_trip_shapes_gdf(self.feed.shapes, ref_nodes_df=ref_nodes)\n\n    @property\n    def shape_links_gdf(self) -> gpd.GeoDataFrame:\n        \"\"\"Return shape-links as a GeoDataFrame using set roadway geometry.\"\"\"\n        if self.road_net is not None:\n            ref_nodes = self.road_net.nodes_df\n        else:\n            ref_nodes = None\n        return shapes_to_shape_links_gdf(self.feed.shapes, ref_nodes_df=ref_nodes)\n\n    @property\n    def stop_time_links_gdf(self) -> gpd.GeoDataFrame:\n        \"\"\"Return stop-time-links as a GeoDataFrame using set roadway geometry.\"\"\"\n        if self.road_net is not None:\n            ref_nodes = self.road_net.nodes_df\n        else:\n            ref_nodes = None\n        return stop_times_to_stop_time_links_gdf(\n            self.feed.stop_times, self.feed.stops, ref_nodes_df=ref_nodes\n        )\n\n    @property\n    def stop_times_points_gdf(self) -> gpd.GeoDataFrame:\n        \"\"\"Return stop-time-points as a GeoDataFrame using set roadway geometry.\"\"\"\n        if self.road_net is not None:\n            ref_nodes = self.road_net.nodes_df\n        else:\n            ref_nodes = None\n\n        return stop_times_to_stop_time_points_gdf(\n            self.feed.stop_times, self.feed.stops, ref_nodes_df=ref_nodes\n        )\n\n    def get_selection(\n        self,\n        selection_dict: dict,\n        
overwrite: bool = False,\n    ) -> TransitSelection:\n        \"\"\"Return selection if it already exists, otherwise performs selection.\n\n        Will raise an error if no trips found.\n\n        Args:\n            selection_dict (dict): _description_\n            overwrite: if True, will overwrite any previously cached searches. Defaults to False.\n\n        Returns:\n            Selection: Selection object\n        \"\"\"\n        key = dict_to_hexkey(selection_dict)\n\n        if (key not in self._selections) or overwrite:\n            WranglerLogger.debug(f\"Performing selection from key: {key}\")\n            self._selections[key] = TransitSelection(self, selection_dict)\n        else:\n            WranglerLogger.debug(f\"Using cached selection from key: {key}\")\n\n        if not self._selections[key]:\n            WranglerLogger.debug(\n                f\"No links or nodes found for selection dict: \\n {selection_dict}\"\n            )\n            raise ValueError(\"Selection not successful.\")\n        return self._selections[key]\n\n    def apply(self, project_card: Union[ProjectCard, dict], **kwargs) -> \"TransitNetwork\":\n        \"\"\"Wrapper method to apply a roadway project, returning a new TransitNetwork instance.\n\n        Args:\n            project_card: either a dictionary of the project card object or ProjectCard instance\n            **kwargs: keyword arguments to pass to project application\n        \"\"\"\n        if not (isinstance(project_card, ProjectCard) or isinstance(project_card, SubProject)):\n            project_card = ProjectCard(project_card)\n\n        if not project_card.valid:\n            WranglerLogger.error(\"Invalid Project Card: {project_card}\")\n            raise ValueError(f\"Project card {project_card.project} not valid.\")\n\n        if project_card.sub_projects:\n            for sp in project_card.sub_projects:\n                WranglerLogger.debug(f\"- applying subproject: {sp.change_type}\")\n                
self._apply_change(sp, **kwargs)\n            return self\n        else:\n            return self._apply_change(project_card, **kwargs)\n\n    def _apply_change(\n        self,\n        change: Union[ProjectCard, SubProject],\n        reference_road_net: Optional[RoadwayNetwork] = None,\n    ) -> TransitNetwork:\n        \"\"\"Apply a single change: a single-project project or a sub-project.\"\"\"\n        if not isinstance(change, SubProject):\n            WranglerLogger.info(f\"Applying Project to Transit Network: {change.project}\")\n\n        if change.change_type == \"transit_property_change\":\n            return apply_transit_property_change(\n                self,\n                self.get_selection(change.service),\n                change.transit_property_change,\n            )\n\n        elif change.change_type == \"transit_routing_change\":\n            return apply_transit_routing_change(\n                self,\n                self.get_selection(change.service),\n                change.transit_routing_change,\n                reference_road_net=reference_road_net,\n            )\n\n        elif change.change_type == \"add_new_route\":\n            return apply_add_transit_route_change(self, change.transit_route_addition)\n\n        elif change.change_type == \"roadway_deletion\":\n            # FIXME\n            raise NotImplementedError(\"Roadway deletion check not yet implemented.\")\n\n        elif change.change_type == \"pycode\":\n            return apply_calculated_transit(self, change.pycode)\n\n        else:\n            msg = f\"Not a currently valid transit project: {change}.\"\n            WranglerLogger.error(msg)\n            raise NotImplementedError(msg)\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.config","title":"config property","text":"

Pass through property from Feed.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.consistent_with_road_net","title":"consistent_with_road_net: bool property","text":"

Indicate if road_net is consistent with transit network.

Checks the network hash from when consistency was last evaluated. If the transit network or roadway network has changed, consistency is re-evaluated, self._stored_road_net_hash is updated, and the updated value is returned.

Returns:

Type Description bool

Boolean indicating if road_net is consistent with transit network.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.feed","title":"feed property writable","text":"

Feed associated with the transit network.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.feed_hash","title":"feed_hash property","text":"

Return the hash of the feed.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.feed_path","title":"feed_path property","text":"

Pass through property from Feed.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.road_net","title":"road_net: RoadwayNetwork property writable","text":"

Roadway network associated with the transit network.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.shape_links_gdf","title":"shape_links_gdf: gpd.GeoDataFrame property","text":"

Return shape-links as a GeoDataFrame using set roadway geometry.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.shapes_gdf","title":"shapes_gdf: gpd.GeoDataFrame property","text":"

Return aggregated shapes as a GeoDataFrame using set roadway geometry.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.stop_time_links_gdf","title":"stop_time_links_gdf: gpd.GeoDataFrame property","text":"

Return stop-time-links as a GeoDataFrame using set roadway geometry.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.stop_times_points_gdf","title":"stop_times_points_gdf: gpd.GeoDataFrame property","text":"

Return stop-time-points as a GeoDataFrame using set roadway geometry.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.stops_gdf","title":"stops_gdf: gpd.GeoDataFrame property","text":"

Return stops as a GeoDataFrame using set roadway geometry.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.__deepcopy__","title":"__deepcopy__(memo)","text":"

Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.

Source code in network_wrangler/transit/network.py
def __deepcopy__(self, memo):\n    \"\"\"Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.\"\"\"\n    COPY_REF_NOT_VALUE = [\"_road_net\"]\n    # Create a new, empty instance\n    copied_net = self.__class__.__new__(self.__class__)\n    # Return the new TransitNetwork instance\n    attribute_dict = vars(self)\n\n    # Copy the attributes to the new instance\n    for attr_name, attr_value in attribute_dict.items():\n        # WranglerLogger.debug(f\"Copying {attr_name}\")\n        if attr_name in COPY_REF_NOT_VALUE:\n            # If the attribute is in the COPY_REF_NOT_VALUE list, assign the reference\n            setattr(copied_net, attr_name, attr_value)\n        else:\n            # WranglerLogger.debug(f\"making deep copy: {attr_name}\")\n            # For other attributes, perform a deep copy\n            setattr(copied_net, attr_name, copy.deepcopy(attr_value, memo))\n\n    return copied_net\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.__init__","title":"__init__(feed)","text":"

Constructor for TransitNetwork.

Parameters:

Name Type Description Default feed Feed

Feed object representing the transit network gtfs tables

required Source code in network_wrangler/transit/network.py
def __init__(self, feed: Feed):\n    \"\"\"Constructor for TransitNetwork.\n\n    Args:\n        feed: Feed object representing the transit network gtfs tables\n    \"\"\"\n    WranglerLogger.debug(\"Creating new TransitNetwork.\")\n\n    self._road_net: Optional[RoadwayNetwork] = None\n    self.feed: Feed = feed\n    self.graph: nx.MultiDiGraph = None\n\n    # initialize\n    self._consistent_with_road_net = False\n\n    # cached selections\n    self._selections: dict[str, dict] = {}\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.apply","title":"apply(project_card, **kwargs)","text":"

Wrapper method to apply a project card, returning a new TransitNetwork instance.

Parameters:

Name Type Description Default project_card Union[ProjectCard, dict]

either a dictionary of the project card object or ProjectCard instance

required **kwargs

keyword arguments to pass to project application

{} Source code in network_wrangler/transit/network.py
def apply(self, project_card: Union[ProjectCard, dict], **kwargs) -> \"TransitNetwork\":\n    \"\"\"Wrapper method to apply a roadway project, returning a new TransitNetwork instance.\n\n    Args:\n        project_card: either a dictionary of the project card object or ProjectCard instance\n        **kwargs: keyword arguments to pass to project application\n    \"\"\"\n    if not (isinstance(project_card, ProjectCard) or isinstance(project_card, SubProject)):\n        project_card = ProjectCard(project_card)\n\n    if not project_card.valid:\n        WranglerLogger.error(\"Invalid Project Card: {project_card}\")\n        raise ValueError(f\"Project card {project_card.project} not valid.\")\n\n    if project_card.sub_projects:\n        for sp in project_card.sub_projects:\n            WranglerLogger.debug(f\"- applying subproject: {sp.change_type}\")\n            self._apply_change(sp, **kwargs)\n        return self\n    else:\n        return self._apply_change(project_card, **kwargs)\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.deepcopy","title":"deepcopy()","text":"

Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.

Source code in network_wrangler/transit/network.py
def deepcopy(self):\n    \"\"\"Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.\"\"\"\n    return copy.deepcopy(self)\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.get_selection","title":"get_selection(selection_dict, overwrite=False)","text":"

Return selection if it already exists, otherwise performs selection.

Will raise an error if no trips found.

Parameters:

Name Type Description Default selection_dict dict

dictionary of selection criteria for transit trips.

required overwrite bool

if True, will overwrite any previously cached searches. Defaults to False.

False

Returns:

Name Type Description Selection TransitSelection

Selection object

Source code in network_wrangler/transit/network.py
def get_selection(\n    self,\n    selection_dict: dict,\n    overwrite: bool = False,\n) -> TransitSelection:\n    \"\"\"Return selection if it already exists, otherwise performs selection.\n\n    Will raise an error if no trips found.\n\n    Args:\n        selection_dict (dict): dictionary of selection criteria for transit trips.\n        overwrite: if True, will overwrite any previously cached searches. Defaults to False.\n\n    Returns:\n        Selection: Selection object\n    \"\"\"\n    key = dict_to_hexkey(selection_dict)\n\n    if (key not in self._selections) or overwrite:\n        WranglerLogger.debug(f\"Performing selection from key: {key}\")\n        self._selections[key] = TransitSelection(self, selection_dict)\n    else:\n        WranglerLogger.debug(f\"Using cached selection from key: {key}\")\n\n    if not self._selections[key]:\n        WranglerLogger.debug(\n            f\"No links or nodes found for selection dict: \\n {selection_dict}\"\n        )\n        raise ValueError(\"Selection not successful.\")\n    return self._selections[key]\n
"},{"location":"api/#network_wrangler.transit.network.TransitRoadwayConsistencyError","title":"TransitRoadwayConsistencyError","text":"

Bases: Exception

Error raised when transit network is inconsistent with roadway network.

Source code in network_wrangler/transit/network.py
class TransitRoadwayConsistencyError(Exception):\n    \"\"\"Error raised when transit network is inconsistent with roadway network.\"\"\"\n\n    pass\n
"},{"location":"api/#parameters","title":"Parameters","text":"

Parameters for Network Wrangler.

"},{"location":"api/#network_wrangler.params.COPY_FROM_GP_TO_ML","title":"COPY_FROM_GP_TO_ML = ['ref', 'roadway', 'access', 'distance', 'bike_access', 'drive_access', 'walk_access', 'bus_only', 'rail_only'] module-attribute","text":"

(list(str)): list of attributes to copy from a general purpose lane to managed lane so long as a ML_ doesn\u2019t exist.

"},{"location":"api/#network_wrangler.params.COPY_TO_ACCESS_EGRESS","title":"COPY_TO_ACCESS_EGRESS = ['ref', 'ML_access', 'ML_drive_access', 'ML_bus_only', 'ML_rail_only'] module-attribute","text":"

(list(str)): list of attributes copied from GP lanes to access and egress dummy links.

"},{"location":"api/#network_wrangler.params.DEFAULT_CATEGORY","title":"DEFAULT_CATEGORY = 'any' module-attribute","text":"

Default category for scoped values.

"},{"location":"api/#network_wrangler.params.DEFAULT_MAX_SEARCH_BREADTH","title":"DEFAULT_MAX_SEARCH_BREADTH = 10 module-attribute","text":"

(int): default for maximum number of links traversed between links that match the searched name when searching for paths between A and B node

"},{"location":"api/#network_wrangler.params.DEFAULT_SEARCH_BREADTH","title":"DEFAULT_SEARCH_BREADTH = 5 module-attribute","text":"

(int): default for initial number of links from name-based selection that are traversed before trying another shortest path when searching for paths between A and B node

"},{"location":"api/#network_wrangler.params.DEFAULT_SEARCH_MODES","title":"DEFAULT_SEARCH_MODES = ['drive'] module-attribute","text":"

(list(str)): default mode(s) to use when searching for paths between A and B node

"},{"location":"api/#network_wrangler.params.DEFAULT_SP_WEIGHT_COL","title":"DEFAULT_SP_WEIGHT_COL = 'i' module-attribute","text":"

(str): default column to use as weights in the shortest path calculations.

"},{"location":"api/#network_wrangler.params.DEFAULT_SP_WEIGHT_FACTOR","title":"DEFAULT_SP_WEIGHT_FACTOR = 100 module-attribute","text":"

(Union[int, float]): default penalty assigned for each degree of distance between a link and a link with the searched-for name when searching for paths between A and B node

"},{"location":"api/#network_wrangler.params.DEFAULT_TIMESPAN","title":"DEFAULT_TIMESPAN = ['00:00', '24:00'] module-attribute","text":"

Default timespan for scoped values.

"},{"location":"api/#network_wrangler.params.EST_PD_READ_SPEED","title":"EST_PD_READ_SPEED = {'csv': 0.03, 'parquet': 0.005, 'geojson': 0.03, 'json': 0.15, 'txt': 0.04} module-attribute","text":"

Read sec / MB - WILL DEPEND ON SPECIFIC COMPUTER"},{"location":"api/#network_wrangler.params.MANAGED_LANES_LINK_ID_SCALAR","title":"MANAGED_LANES_LINK_ID_SCALAR = 1000000 module-attribute","text":"

scalar value added to the general purpose lanes\u2019 model_link_id when creating an associated link for a parallel managed lane

"},{"location":"api/#network_wrangler.params.MANAGED_LANES_REQUIRED_ATTRIBUTES","title":"MANAGED_LANES_REQUIRED_ATTRIBUTES = ['A', 'B', 'model_link_id'] module-attribute","text":"

(list(str)): list of attributes that must be provided in managed lanes

"},{"location":"api/#network_wrangler.params.LinksParams","title":"LinksParams dataclass","text":"

Parameters for RoadLinksTable.

Source code in network_wrangler/params.py
@dataclass\nclass LinksParams:\n    \"\"\"Parameters for RoadLinksTable.\"\"\"\n\n    primary_key: str = field(default=\"model_link_id\")\n    _addtl_unique_ids: list[str] = field(default_factory=lambda: [])\n    _addtl_explicit_ids: list[str] = field(default_factory=lambda: [\"osm_link_id\"])\n    from_node: str = field(default=\"A\")\n    to_node: str = field(default=\"B\")\n    fk_to_shape: str = field(default=\"shape_id\")\n    table_type: Literal[\"links\"] = field(default=\"links\")\n    source_file: str = field(default=None)\n    modes_to_network_link_variables: dict = field(\n        default_factory=lambda: MODES_TO_NETWORK_LINK_VARIABLES\n    )\n\n    @property\n    def idx_col(self):\n        \"\"\"Column to make the index of the table.\"\"\"\n        return self.primary_key + \"_idx\"\n\n    @property\n    def fks_to_nodes(self):\n        \"\"\"Foreign keys to nodes in the network.\"\"\"\n        return [self.from_node, self.to_node]\n\n    @property\n    def unique_ids(self) -> List[str]:\n        \"\"\"List of unique ids for the table.\"\"\"\n        _uids = self._addtl_unique_ids + [self.primary_key]\n        return list(set(_uids))\n\n    @property\n    def explicit_ids(self) -> List[str]:\n        \"\"\"List of columns that can be used to easily find specific rows in the table.\"\"\"\n        return list(set(self.unique_ids + self._addtl_explicit_ids))\n\n    @property\n    def display_cols(self) -> List[str]:\n        \"\"\"List of columns to display in the table.\"\"\"\n        _addtl = [\"lanes\"]\n        return list(set(self.explicit_ids + self.fks_to_nodes + _addtl))\n
"},{"location":"api/#network_wrangler.params.LinksParams.display_cols","title":"display_cols: List[str] property","text":"

List of columns to display in the table.

"},{"location":"api/#network_wrangler.params.LinksParams.explicit_ids","title":"explicit_ids: List[str] property","text":"

List of columns that can be used to easily find specific rows in the table.

"},{"location":"api/#network_wrangler.params.LinksParams.fks_to_nodes","title":"fks_to_nodes property","text":"

Foreign keys to nodes in the network.

"},{"location":"api/#network_wrangler.params.LinksParams.idx_col","title":"idx_col property","text":"

Column to make the index of the table.

"},{"location":"api/#network_wrangler.params.LinksParams.unique_ids","title":"unique_ids: List[str] property","text":"

List of unique ids for the table.

"},{"location":"api/#network_wrangler.params.NodesParams","title":"NodesParams dataclass","text":"

Parameters for RoadNodesTable.

Source code in network_wrangler/params.py
@dataclass\nclass NodesParams:\n    \"\"\"Parameters for RoadNodesTable.\"\"\"\n\n    primary_key: str = field(default=\"model_node_id\")\n    _addtl_unique_ids: list[str] = field(default_factory=lambda: [\"osm_node_id\"])\n    _addtl_explicit_ids: list[str] = field(default_factory=lambda: [])\n    source_file: str = field(default=None)\n    table_type: Literal[\"nodes\"] = field(default=\"nodes\")\n    x_field: str = field(default=\"X\")\n    y_field: str = field(default=\"Y\")\n\n    @property\n    def geometry_props(self) -> List[str]:\n        \"\"\"List of geometry properties.\"\"\"\n        return [self.x_field, self.y_field, \"geometry\"]\n\n    @property\n    def idx_col(self) -> str:\n        \"\"\"Column to make the index of the table.\"\"\"\n        return self.primary_key + \"_idx\"\n\n    @property\n    def unique_ids(self) -> List[str]:\n        \"\"\"List of unique ids for the table.\"\"\"\n        _uids = self._addtl_unique_ids + [self.primary_key]\n        return list(set(_uids))\n\n    @property\n    def explicit_ids(self) -> List[str]:\n        \"\"\"List of columns that can be used to easily find specific records in the table.\"\"\"\n        _eids = self._addtl_unique_ids + self.unique_ids\n        return list(set(_eids))\n\n    @property\n    def display_cols(self) -> List[str]:\n        \"\"\"Columns to display in the table.\"\"\"\n        return self.explicit_ids\n
"},{"location":"api/#network_wrangler.params.NodesParams.display_cols","title":"display_cols: List[str] property","text":"

Columns to display in the table.

"},{"location":"api/#network_wrangler.params.NodesParams.explicit_ids","title":"explicit_ids: List[str] property","text":"

List of columns that can be used to easily find specific records in the table.

"},{"location":"api/#network_wrangler.params.NodesParams.geometry_props","title":"geometry_props: List[str] property","text":"

List of geometry properties.

"},{"location":"api/#network_wrangler.params.NodesParams.idx_col","title":"idx_col: str property","text":"

Column to make the index of the table.

"},{"location":"api/#network_wrangler.params.NodesParams.unique_ids","title":"unique_ids: List[str] property","text":"

List of unique ids for the table.

"},{"location":"api/#network_wrangler.params.ShapesParams","title":"ShapesParams dataclass","text":"

Parameters for RoadShapesTable.

Source code in network_wrangler/params.py
@dataclass\nclass ShapesParams:\n    \"\"\"Parameters for RoadShapesTable.\"\"\"\n\n    primary_key: str = field(default=\"shape_id\")\n    _addtl_unique_ids: list[str] = field(default_factory=lambda: [])\n    table_type: Literal[\"shapes\"] = field(default=\"shapes\")\n    source_file: str = field(default=None)\n\n    @property\n    def idx_col(self) -> str:\n        \"\"\"Column to make the index of the table.\"\"\"\n        return self.primary_key + \"_idx\"\n\n    @property\n    def unique_ids(self) -> list[str]:\n        \"\"\"List of unique ids for the table.\"\"\"\n        # list.append returns None, so set(..._addtl_unique_ids.append(...)) would\n        # raise TypeError; concatenate instead, as LinksParams and NodesParams do.\n        return list(set(self._addtl_unique_ids + [self.primary_key]))\n
"},{"location":"api/#network_wrangler.params.ShapesParams.idx_col","title":"idx_col: str property","text":"

Column to make the index of the table.

"},{"location":"api/#network_wrangler.params.ShapesParams.unique_ids","title":"unique_ids: list[str] property","text":"

List of unique ids for the table.

"},{"location":"api/#projects","title":"Projects","text":"

Projects are how you manipulate the networks. Each project type is defined in a module in the projects folder and accepts a RoadwayNetwork and/or TransitNetwork as an input and returns the same objects (manipulated) as an output.

"},{"location":"api/#roadway","title":"Roadway","text":"

The roadway module contains submodules which define and extend the links, nodes, and shapes dataframe objects within a RoadwayNetwork object, as well as other classes and methods which support and extend the RoadwayNetwork class.

"},{"location":"api/#network-objects","title":"Network Objects","text":"

Submodules which define and extend the links, nodes, and shapes dataframe objects within a RoadwayNetwork object. Includes classes which define:

"},{"location":"api/#links","title":"Links","text":"

:: network_wrangler.roadway.links.io :: network_wrangler.roadway.links.create :: network_wrangler.roadway.links.delete :: network_wrangler.roadway.links.edit :: network_wrangler.roadway.links.filters :: network_wrangler.roadway.links.geo :: network_wrangler.roadway.links.scopes :: network_wrangler.roadway.links.summary :: network_wrangler.roadway.links.validate :: network_wrangler.roadway.links.df_accessors

"},{"location":"api/#nodes","title":"Nodes","text":"

:: network_wrangler.roadway.nodes.io :: network_wrangler.roadway.nodes.create :: network_wrangler.roadway.nodes.delete :: network_wrangler.roadway.nodes.edit :: network_wrangler.roadway.nodes.filters :: network_wrangler.roadway.nodes

"},{"location":"api/#shapes","title":"Shapes","text":"

:: network_wrangler.roadway.shapes.io :: network_wrangler.roadway.shapes.create :: network_wrangler.roadway.shapes.edit :: network_wrangler.roadway.shapes.delete :: network_wrangler.roadway.shapes.filters :: network_wrangler.roadway.shapes.shapes

"},{"location":"api/#supporting-classes-methods-parameters","title":"Supporting Classes, Methods + Parameters","text":"

:: network_wrangler.roadway.segment :: network_wrangler.roadway.subnet :: network_wrangler.roadway.graph

"},{"location":"api/#utils-and-functions","title":"Utils and Functions","text":"

General utility functions used throughout the package.

Helper functions for reading and writing files to reduce boilerplate.

Helper functions for data models.

Functions to help with network manipulations in dataframes.

Functions related to parsing and comparing time objects and series.

Internal function terminology for timespan scopes:

Utility functions for pandas data manipulation.

Helper geographic manipulation functions.

Dataframe accessors that allow functions to be called directly on the dataframe.

Logging utilities for Network Wrangler.

"},{"location":"api/#network_wrangler.utils.utils.check_one_or_one_superset_present","title":"check_one_or_one_superset_present(mixed_list, all_fields_present)","text":"

Checks that exactly one of the fields in mixed_list is in fields_present or one superset.

Source code in network_wrangler/utils/utils.py
def check_one_or_one_superset_present(\n    mixed_list: list[Union[str, list[str]]], all_fields_present: list[str]\n) -> bool:\n    \"\"\"Checks that exactly one of the fields in mixed_list is in fields_present or one superset.\"\"\"\n    normalized_list = normalize_to_lists(mixed_list)\n\n    list_items_present = [i for i in normalized_list if set(i).issubset(all_fields_present)]\n\n    if len(list_items_present) == 1:\n        return True\n\n    return list_elements_subset_of_single_element(list_items_present)\n
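The behavior is easiest to see with a condensed, runnable sketch; normalize_to_lists and list_elements_subset_of_single_element are adapted from this same module, and the field names are purely illustrative:

```python
def normalize_to_lists(mixed_list: list) -> list:
    """Turn a mixed list of scalars and lists into a list of lists."""
    return [[item] if isinstance(item, str) else item for item in mixed_list]

def list_elements_subset_of_single_element(mixed_list: list) -> bool:
    """True if exactly one list element is a superset of every element."""
    potential_supersets = [set(i) for i in mixed_list if isinstance(i, list) and i]
    if not potential_supersets:
        return False
    normalized = normalize_to_lists(mixed_list)
    valid = [s for s in potential_supersets if all(s.issuperset(i) for i in normalized)]
    return len(valid) == 1

def check_one_or_one_superset_present(mixed_list, all_fields_present) -> bool:
    """Exactly one entry of mixed_list is covered by all_fields_present,
    or the covered entries are all subsets of a single one of them."""
    present = [i for i in normalize_to_lists(mixed_list) if set(i).issubset(all_fields_present)]
    if len(present) == 1:
        return True
    return list_elements_subset_of_single_element(present)

print(check_one_or_one_superset_present(["a", "q"], ["a", "b", "c"]))  # True: only "a" is present
print(check_one_or_one_superset_present(["a", ["a", "b"]], ["a", "b"]))  # True: ["a", "b"] covers both
print(check_one_or_one_superset_present(["a", "b"], ["a", "b"]))  # False: two unrelated entries
```

The second call shows the "one superset" escape hatch: two entries are present, but one of them covers the other, so the check still passes.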
"},{"location":"api/#network_wrangler.utils.utils.combine_unique_unhashable_list","title":"combine_unique_unhashable_list(list1, list2)","text":"

Combines lists preserving order of first and removing duplicates.

Parameters:

Name Type Description Default list1 list

The first list.

required list2 list

The second list.

required

Returns:

Name Type Description list

A new list containing the elements from list1 followed by the

unique elements from list2.

Example

list1 = [1, 2, 3] list2 = [2, 3, 4, 5] combine_unique_unhashable_list(list1, list2) [1, 2, 3, 4, 5]

Source code in network_wrangler/utils/utils.py
def combine_unique_unhashable_list(list1: list, list2: list):\n    \"\"\"Combines lists preserving order of first and removing duplicates.\n\n    Args:\n        list1 (list): The first list.\n        list2 (list): The second list.\n\n    Returns:\n        list: A new list containing the elements from list1 followed by the\n        unique elements from list2.\n\n    Example:\n        >>> list1 = [1, 2, 3]\n        >>> list2 = [2, 3, 4, 5]\n        >>> combine_unique_unhashable_list(list1, list2)\n        [1, 2, 3, 4, 5]\n    \"\"\"\n    return [item for item in list1 if item not in list2] + list2\n
"},{"location":"api/#network_wrangler.utils.utils.delete_keys_from_dict","title":"delete_keys_from_dict(dictionary, keys)","text":"

Removes list of keys from potentially nested dictionary.

SOURCE: https://stackoverflow.com/questions/3405715/ User: @mseifert

Parameters:

Name Type Description Default dictionary dict

dictionary to remove keys from

required keys list

list of keys to remove

required Source code in network_wrangler/utils/utils.py
def delete_keys_from_dict(dictionary: dict, keys: list) -> dict:\n    \"\"\"Removes list of keys from potentially nested dictionary.\n\n    SOURCE: https://stackoverflow.com/questions/3405715/\n    User: @mseifert\n\n    Args:\n        dictionary: dictionary to remove keys from\n        keys: list of keys to remove\n\n    \"\"\"\n    keys_set = set(keys)  # Just an optimization for the \"if key in keys\" lookup.\n\n    modified_dict = {}\n    for key, value in dictionary.items():\n        if key not in keys_set:\n            if isinstance(value, dict):\n                modified_dict[key] = delete_keys_from_dict(value, keys_set)\n            else:\n                modified_dict[key] = (\n                    value  # or copy.deepcopy(value) if a copy is desired for non-dicts.\n                )\n    return modified_dict\n
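For example, stripping every "a" key from a nested dictionary (the function is copied from the source above; the data is illustrative):

```python
def delete_keys_from_dict(dictionary: dict, keys: list) -> dict:
    """Removes list of keys from potentially nested dictionary."""
    keys_set = set(keys)  # optimization for the "if key in keys" lookup
    modified_dict = {}
    for key, value in dictionary.items():
        if key not in keys_set:
            if isinstance(value, dict):
                # recurse into nested dictionaries
                modified_dict[key] = delete_keys_from_dict(value, keys_set)
            else:
                modified_dict[key] = value
    return modified_dict

nested = {"a": 1, "meta": {"a": 2, "b": 3}, "c": 4}
print(delete_keys_from_dict(nested, ["a"]))  # {'meta': {'b': 3}, 'c': 4}
```

Note that a new dictionary is returned and the input is left unmodified (non-dict values are shared, not deep-copied).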
"},{"location":"api/#network_wrangler.utils.utils.dict_to_hexkey","title":"dict_to_hexkey(d)","text":"

Converts a dictionary to a hexdigest of the sha1 hash of the dictionary.

Parameters:

Name Type Description Default d dict

dictionary to convert to string

required

Returns:

Name Type Description str str

hexdigest of the sha1 hash of dictionary

Source code in network_wrangler/utils/utils.py
def dict_to_hexkey(d: dict) -> str:\n    \"\"\"Converts a dictionary to a hexdigest of the sha1 hash of the dictionary.\n\n    Args:\n        d (dict): dictionary to convert to string\n\n    Returns:\n        str: hexdigest of the sha1 hash of dictionary\n    \"\"\"\n    return hashlib.sha1(str(d).encode()).hexdigest()\n
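For example (the digest is stable for an identical dict, but since str(d) reflects insertion order, the same keys inserted in a different order hash differently):

```python
import hashlib

def dict_to_hexkey(d: dict) -> str:
    """Converts a dictionary to a hexdigest of the sha1 hash of the dictionary."""
    return hashlib.sha1(str(d).encode()).hexdigest()

# Illustrative selection dict; any dict works.
key = dict_to_hexkey({"modes": ["drive"], "lanes": 2})
print(key)  # 40-character hex string
```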
"},{"location":"api/#network_wrangler.utils.utils.findkeys","title":"findkeys(node, kv)","text":"

Returns values of all keys in various objects.

Adapted from arainchi on Stack Overflow: https://stackoverflow.com/questions/9807634/find-all-occurrences-of-a-key-in-nested-dictionaries-and-lists

Source code in network_wrangler/utils/utils.py
def findkeys(node, kv):\n    \"\"\"Returns values of all keys in various objects.\n\n    Adapted from arainchi on Stack Overflow:\n    https://stackoverflow.com/questions/9807634/find-all-occurrences-of-a-key-in-nested-dictionaries-and-lists\n    \"\"\"\n    if isinstance(node, list):\n        for i in node:\n            for x in findkeys(i, kv):\n                yield x\n    elif isinstance(node, dict):\n        if kv in node:\n            yield node[kv]\n        for j in node.values():\n            for x in findkeys(j, kv):\n                yield x\n
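For example, collecting every "set" value at any depth of a nested structure (the function is copied from the source above; the data is illustrative):

```python
def findkeys(node, kv):
    """Returns values of all keys in various objects."""
    if isinstance(node, list):
        for i in node:
            for x in findkeys(i, kv):
                yield x
    elif isinstance(node, dict):
        if kv in node:
            yield node[kv]
        for j in node.values():
            for x in findkeys(j, kv):
                yield x

project = {
    "changes": [
        {"property": "lanes", "set": 3},
        {"property": "price", "timeofday": [{"set": 1.5}]},
    ]
}
print(list(findkeys(project, "set")))  # [3, 1.5]
```

Because findkeys is a generator, wrap it in list() (or iterate it) to materialize the results.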
"},{"location":"api/#network_wrangler.utils.utils.generate_list_of_new_ids","title":"generate_list_of_new_ids(input_ids, existing_ids, id_scalar, iter_val=10, max_iter=1000)","text":"

Generates a list of new IDs based on the input IDs, existing IDs, and other parameters.

Parameters:

Name Type Description Default input_ids list[str]

The input IDs for which new IDs need to be generated.

required existing_ids Series

The existing IDs that should be avoided when generating new IDs.

required id_scalar int

The scalar value used to generate new IDs.

required iter_val int

The iteration value used in the generation process. Defaults to 10.

10 max_iter int

The maximum number of iterations allowed in the generation process. Defaults to 1000.

1000

Returns:

Type Description list[str]

list[str]: A list of new IDs generated based on the input IDs and other parameters.

Source code in network_wrangler/utils/utils.py
def generate_list_of_new_ids(\n    input_ids: list[str],\n    existing_ids: pd.Series,\n    id_scalar: int,\n    iter_val: int = 10,\n    max_iter: int = 1000,\n) -> list[str]:\n    \"\"\"Generates a list of new IDs based on the input IDs, existing IDs, and other parameters.\n\n    Args:\n        input_ids (list[str]): The input IDs for which new IDs need to be generated.\n        existing_ids (pd.Series): The existing IDs that should be avoided when generating new IDs.\n        id_scalar (int): The scalar value used to generate new IDs.\n        iter_val (int, optional): The iteration value used in the generation process.\n            Defaults to 10.\n        max_iter (int, optional): The maximum number of iterations allowed in the generation\n            process. Defaults to 1000.\n\n    Returns:\n        list[str]: A list of new IDs generated based on the input IDs and other parameters.\n    \"\"\"\n    # keep new_ids as list to preserve order\n    new_ids = []\n    existing_ids = set(existing_ids)\n    for i in input_ids:\n        new_id = generate_new_id(\n            i,\n            pd.Series(list(existing_ids)),\n            id_scalar,\n            iter_val=iter_val,\n            max_iter=max_iter,\n        )\n        new_ids.append(new_id)\n        existing_ids.add(new_id)\n    return new_ids\n
"},{"location":"api/#network_wrangler.utils.utils.generate_new_id","title":"generate_new_id(input_id, existing_ids, id_scalar, iter_val=10, max_iter=1000)","text":"

Generate a new ID that isn\u2019t in existing_ids.

TODO: check a registry rather than existing IDs

Parameters:

Name Type Description Default input_id str

id to use to generate new id.

required existing_ids Series

series that has existing IDs that should be unique

required id_scalar int

scalar value to initially use to create the new id.

required iter_val int

iteration value to use in the generation process.

10 max_iter int

maximum number of iterations allowed in the generation process.

1000 Source code in network_wrangler/utils/utils.py
def generate_new_id(\n    input_id: str,\n    existing_ids: pd.Series,\n    id_scalar: int,\n    iter_val: int = 10,\n    max_iter: int = 1000,\n) -> str:\n    \"\"\"Generate a new ID that isn't in existing_ids.\n\n    TODO: check a registry rather than existing IDs\n\n    Args:\n        input_id: id to use to generate new id.\n        existing_ids: series that has existing IDs that should be unique\n        id_scalar: scalar value to initially use to create the new id.\n        iter_val: iteration value to use in the generation process.\n        max_iter: maximum number of iterations allowed in the generation process.\n    \"\"\"\n    str_prefix, input_id, str_suffix = split_string_prefix_suffix_from_num(input_id)\n\n    for i in range(1, max_iter + 1):\n        new_id = f\"{str_prefix}{int(input_id) + id_scalar + (iter_val * i)}{str_suffix}\"\n        if new_id not in existing_ids.values:\n            return new_id\n        elif i == max_iter:\n            WranglerLogger.error(f\"Cannot generate new id within max iters of {max_iter}.\")\n            raise ValueError(\"Cannot create unique new id.\")\n
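A runnable sketch of the generation loop, with split_string_prefix_suffix_from_num condensed from the source above and a plain set standing in for the pd.Series of existing IDs (the real function also logs via WranglerLogger before raising):

```python
import re

def split_string_prefix_suffix_from_num(input_string):
    """Split a string into (prefix, last numeric run as int, suffix)."""
    match = re.compile(r"(.*?)(\d+)(\D*)$").match(str(input_string))
    if match:
        prefix, numeric_part, suffix = match.groups()
        return prefix, int(numeric_part), suffix
    return str(input_string), 0, ""

def generate_new_id_sketch(input_id, existing_ids, id_scalar, iter_val=10, max_iter=1000):
    """Same logic as generate_new_id, but existing_ids is a plain set of strings."""
    prefix, num, suffix = split_string_prefix_suffix_from_num(input_id)
    for i in range(1, max_iter + 1):
        new_id = f"{prefix}{num + id_scalar + iter_val * i}{suffix}"
        if new_id not in existing_ids:
            return new_id
    raise ValueError("Cannot create unique new id.")

# 123 + 0 + 10*1 = "133" collides, so the second iteration yields "143".
print(generate_new_id_sketch("123", {"133"}, id_scalar=0))  # 143
```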
"},{"location":"api/#network_wrangler.utils.utils.get_overlapping_range","title":"get_overlapping_range(ranges)","text":"

Returns the overlapping range for a list of ranges or tuples defining ranges.

Parameters:

Name Type Description Default ranges list[Union[tuple[int], range]]

A list of ranges or tuples defining ranges.

required

Returns:

Type Description Union[None, range]

Union[None, range]: The overlapping range if found, otherwise None.

Example

ranges = [(1, 5), (3, 7)] get_overlapping_range(ranges) range(3, 5)

Source code in network_wrangler/utils/utils.py
def get_overlapping_range(ranges: list[Union[tuple[int], range]]) -> Union[None, range]:\n    \"\"\"Returns the overlapping range for a list of ranges or tuples defining ranges.\n\n    Args:\n        ranges (list[Union[tuple[int], range]]): A list of ranges or tuples defining ranges.\n\n    Returns:\n        Union[None, range]: The overlapping range if found, otherwise None.\n\n    Example:\n        >>> ranges = [(1, 5), (3, 7)]\n        >>> get_overlapping_range(ranges)\n        range(3, 5)\n\n    \"\"\"\n    _ranges = [r if isinstance(r, range) else range(r[0], r[1]) for r in ranges]\n\n    _overlap_start = max(r.start for r in _ranges)\n    _overlap_end = min(r.stop for r in _ranges)\n\n    if _overlap_start < _overlap_end:\n        return range(_overlap_start, _overlap_end)\n    else:\n        return None\n
"},{"location":"api/#network_wrangler.utils.utils.list_elements_subset_of_single_element","title":"list_elements_subset_of_single_element(mixed_list)","text":"

Check whether exactly one list element of mixed_list is a superset of all the elements.

Source code in network_wrangler/utils/utils.py
def list_elements_subset_of_single_element(mixed_list: list[Union[str, list[str]]]) -> bool:\n    \"\"\"Check whether exactly one list element of mixed_list is a superset of all the elements.\"\"\"\n    potential_supersets = []\n    for item in mixed_list:\n        if isinstance(item, list) and len(item) > 0:\n            potential_supersets.append(set(item))\n\n    # If no list is found, return False\n    if not potential_supersets:\n        return False\n\n    normalized_list = normalize_to_lists(mixed_list)\n\n    valid_supersets = []\n    for ss in potential_supersets:\n        if all(ss.issuperset(i) for i in normalized_list):\n            valid_supersets.append(ss)\n\n    return len(valid_supersets) == 1\n
"},{"location":"api/#network_wrangler.utils.utils.make_slug","title":"make_slug(text, delimiter='_')","text":"

Makes a slug from text.

Source code in network_wrangler/utils/utils.py
def make_slug(text: str, delimiter: str = \"_\") -> str:\n    \"\"\"Makes a slug from text.\"\"\"\n    text = re.sub(\"[,.;@#?!&$']+\", \"\", text.lower())\n    return re.sub(\"[\\ ]+\", delimiter, text)  # noqa: W605\n
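For example (function copied from the source above, with the space-collapsing pattern written as a raw string):

```python
import re

def make_slug(text: str, delimiter: str = "_") -> str:
    """Makes a slug from text."""
    # Drop common punctuation, lower-case, then collapse runs of spaces.
    text = re.sub("[,.;@#?!&$']+", "", text.lower())
    return re.sub(r"[ ]+", delimiter, text)

print(make_slug("Add Managed Lane, I-394!"))  # add_managed_lane_i-394
print(make_slug("two  spaces", delimiter="-"))  # two-spaces
```

Note that characters outside the punctuation class, such as the hyphen in "I-394", are preserved.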
"},{"location":"api/#network_wrangler.utils.utils.normalize_to_lists","title":"normalize_to_lists(mixed_list)","text":"

Turn a mixed list of scalars and lists into a list of lists.

Source code in network_wrangler/utils/utils.py
def normalize_to_lists(mixed_list: list[Union[str, list]]) -> list[list]:\n    \"\"\"Turn a mixed list of scalars and lists into a list of lists.\"\"\"\n    normalized_list = []\n    for item in mixed_list:\n        if isinstance(item, str):\n            normalized_list.append([item])\n        else:\n            normalized_list.append(item)\n    return normalized_list\n
"},{"location":"api/#network_wrangler.utils.utils.split_string_prefix_suffix_from_num","title":"split_string_prefix_suffix_from_num(input_string)","text":"

Split a string prefix and suffix from last number.

Parameters:

Name Type Description Default input_string str

The input string to be processed.

required

Returns:

Name Type Description tuple

A tuple containing the prefix (including preceding numbers), the last numeric part as an integer, and the suffix.

Notes

This function uses regular expressions to split a string into three parts: the prefix, the last numeric part, and the suffix. The prefix includes any preceding numbers, the last numeric part is converted to an integer, and the suffix includes any non-digit characters after the last numeric part.

Examples:

>>> split_string_prefix_suffix_from_num(\"abc123def456\")\n('abc123def', 456, '')\n
>>> split_string_prefix_suffix_from_num(\"hello\")\n('hello', 0, '')\n
>>> split_string_prefix_suffix_from_num(\"123\")\n('', 123, '')\n
Source code in network_wrangler/utils/utils.py
def split_string_prefix_suffix_from_num(input_string: str):\n    \"\"\"Split a string prefix and suffix from *last* number.\n\n    Args:\n        input_string (str): The input string to be processed.\n\n    Returns:\n        tuple: A tuple containing the prefix (including preceding numbers),\n               the last numeric part as an integer, and the suffix.\n\n    Notes:\n        This function uses regular expressions to split a string into three parts:\n        the prefix, the last numeric part, and the suffix. The prefix includes any\n        preceding numbers, the last numeric part is converted to an integer, and\n        the suffix includes any non-digit characters after the last numeric part.\n\n    Examples:\n        >>> split_string_prefix_suffix_from_num(\"abc123def456\")\n        ('abc123def', 456, '')\n\n        >>> split_string_prefix_suffix_from_num(\"hello\")\n        ('hello', 0, '')\n\n        >>> split_string_prefix_suffix_from_num(\"123\")\n        ('', 123, '')\n\n    \"\"\"\n    input_string = str(input_string)\n    pattern = re.compile(r\"(.*?)(\\d+)(\\D*)$\")\n    match = pattern.match(input_string)\n\n    if match:\n        # Extract the groups: prefix (including preceding numbers), last numeric part, suffix\n        prefix, numeric_part, suffix = match.groups()\n        # Convert the numeric part to an integer\n        num_variable = int(numeric_part)\n        return prefix, num_variable, suffix\n    else:\n        return input_string, 0, \"\"\n
"},{"location":"api/#network_wrangler.utils.utils.topological_sort","title":"topological_sort(adjacency_list, visited_list)","text":"

Topological sorting for Acyclic Directed Graph.

Parameters: - adjacency_list (dict): A dictionary representing the adjacency list of the graph. - visited_list (list): A list representing the visited status of each vertex in the graph.

Returns: - output_stack (list): A list containing the vertices in topological order.

This function performs a topological sort on an acyclic directed graph. It takes an adjacency list and a visited list as input. The adjacency list represents the connections between vertices in the graph, and the visited list keeps track of the visited status of each vertex.

The function uses a recursive helper function to perform the topological sort. It starts by iterating over each vertex in the visited list. For each unvisited vertex, it calls the helper function, which recursively visits all the neighbors of the vertex and adds them to the output stack in reverse order. Finally, it returns the output stack, which contains the vertices in topological order.

Source code in network_wrangler/utils/utils.py
def topological_sort(adjacency_list, visited_list):\n    \"\"\"Topological sorting for Acyclic Directed Graph.\n\n    Parameters:\n    - adjacency_list (dict): A dictionary representing the adjacency list of the graph.\n    - visited_list (list): A list representing the visited status of each vertex in the graph.\n\n    Returns:\n    - output_stack (list): A list containing the vertices in topological order.\n\n    This function performs a topological sort on an acyclic directed graph. It takes an adjacency\n    list and a visited list as input. The adjacency list represents the connections between\n    vertices in the graph, and the visited list keeps track of the visited status of each vertex.\n\n    The function uses a recursive helper function to perform the topological sort. It starts by\n    iterating over each vertex in the visited list. For each unvisited vertex, it calls the helper\n    function, which recursively visits all the neighbors of the vertex and adds them to the output\n    stack in reverse order. Finally, it returns the output stack, which contains the vertices in\n    topological order.\n    \"\"\"\n    output_stack = []\n\n    def _topology_sort_util(vertex):\n        if not visited_list[vertex]:\n            visited_list[vertex] = True\n            for neighbor in adjacency_list[vertex]:\n                _topology_sort_util(neighbor)\n            output_stack.insert(0, vertex)\n\n    for vertex in visited_list:\n        _topology_sort_util(vertex)\n\n    return output_stack\n
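A runnable example (function copied from the source above). Note that despite the docstring calling visited_list a list, it is indexed by vertex, so a dict mapping each vertex to False works directly:

```python
def topological_sort(adjacency_list, visited_list):
    """Topological sorting for Acyclic Directed Graph."""
    output_stack = []

    def _topology_sort_util(vertex):
        if not visited_list[vertex]:
            visited_list[vertex] = True
            for neighbor in adjacency_list[vertex]:
                _topology_sort_util(neighbor)
            output_stack.insert(0, vertex)

    for vertex in visited_list:
        _topology_sort_util(vertex)

    return output_stack

# Edges a -> b -> c; the sort must emit each vertex before its successors.
adjacency = {"a": ["b"], "b": ["c"], "c": []}
order = topological_sort(adjacency, {v: False for v in adjacency})
print(order)  # ['a', 'b', 'c']
```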
"},{"location":"api/#network_wrangler.utils.io.FileReadError","title":"FileReadError","text":"

Bases: Exception

Raised when there is an error reading a file.

Source code in network_wrangler/utils/io.py
class FileReadError(Exception):\n    \"\"\"Raised when there is an error reading a file.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.io.FileWriteError","title":"FileWriteError","text":"

Bases: Exception

Raised when there is an error writing a file.

Source code in network_wrangler/utils/io.py
class FileWriteError(Exception):\n    \"\"\"Raised when there is an error writing a file.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.io.read_table","title":"read_table(filename, sub_filename=None)","text":"

Read file and return a dataframe or geodataframe.

If filename is a zip file, will unzip to a temporary directory.

NOTE: if you are accessing multiple files from this zip file you will want to unzip it first and THEN access the table files so you don\u2019t create multiple duplicate unzipped tmp dirs.

Parameters:

Name Type Description Default filename Path

filename to load.

required sub_filename str

if the file is a zip, the sub_filename to load.

None Source code in network_wrangler/utils/io.py
def read_table(filename: Path, sub_filename: str = None) -> Union[pd.DataFrame, gpd.GeoDataFrame]:\n    \"\"\"Read file and return a dataframe or geodataframe.\n\n    If filename is a zip file, will unzip to a temporary directory.\n\n    NOTE:  if you are accessing multiple files from this zip file you will want to unzip it first\n    and THEN access the table files so you don't create multiple duplicate unzipped tmp dirs.\n\n    Args:\n        filename (Path): filename to load.\n        sub_filename: if the file is a zip, the sub_filename to load.\n\n    \"\"\"\n    filename = Path(filename)\n    if filename.suffix == \".zip\":\n        filename = unzip_file(filename) / sub_filename\n    WranglerLogger.debug(f\"Estimated read time: {_estimate_read_time_of_file(filename)}.\")\n    if any([x in filename.suffix for x in [\"geojson\", \"shp\", \"csv\"]]):\n        try:\n            return gpd.read_file(filename)\n        except:  # noqa: E722\n            if \"csv\" in filename.suffix:\n                return pd.read_csv(filename)\n            raise FileReadError\n    elif \"parquet\" in filename.suffix:\n        try:\n            return gpd.read_parquet(filename)\n        except:  # noqa: E722\n            return pd.read_parquet(filename)\n    elif \"json\" in filename.suffix:\n        with open(filename) as f:\n            return pd.read_json(f, orient=\"records\")\n    raise NotImplementedError(f\"Filetype {filename.suffix} not implemented.\")\n
"},{"location":"api/#network_wrangler.utils.io.unzip_file","title":"unzip_file(path)","text":"

Unzips a file to a temporary directory and returns the directory path.

Source code in network_wrangler/utils/io.py
def unzip_file(path: Path) -> Path:\n    \"\"\"Unzips a file to a temporary directory and returns the directory path.\"\"\"\n    tmpdir = tempfile.mkdtemp()\n    shutil.unpack_archive(path, tmpdir)\n\n    # Lazy cleanup at interpreter exit. (str objects cannot be weak-referenced,\n    # so weakref.finalize(tmpdir, ...) would raise TypeError here.)\n    atexit.register(shutil.rmtree, tmpdir, ignore_errors=True)\n\n    return Path(tmpdir)\n
"},{"location":"api/#network_wrangler.utils.io.write_table","title":"write_table(df, filename, overwrite=False, **kwargs)","text":"

Write a dataframe or geodataframe to a file.

Parameters:

Name Type Description Default df DataFrame

dataframe to write.

required filename Path

filename to write to.

required overwrite bool

whether to overwrite the file if it exists. Defaults to False.

False kwargs

additional arguments to pass to the writer.

{} Source code in network_wrangler/utils/io.py
def write_table(\n    df: Union[pd.DataFrame, gpd.GeoDataFrame],\n    filename: Path,\n    overwrite: bool = False,\n    **kwargs,\n) -> None:\n    \"\"\"Write a dataframe or geodataframe to a file.\n\n    Args:\n        df (pd.DataFrame): dataframe to write.\n        filename (Path): filename to write to.\n        overwrite (bool): whether to overwrite the file if it exists. Defaults to False.\n        kwargs: additional arguments to pass to the writer.\n\n    \"\"\"\n    filename = Path(filename)\n    if filename.exists() and not overwrite:\n        raise FileExistsError(f\"File {filename} already exists and overwrite is False.\")\n\n    # Path.is_dir() implies the path exists, so the original\n    # `is_dir() and not exists()` guard could never create the parent.\n    if not filename.parent.exists():\n        filename.parent.mkdir(parents=True)\n\n    WranglerLogger.debug(f\"Writing to {filename}.\")\n\n    if \"shp\" in filename.suffix:\n        df.to_file(filename, index=False, **kwargs)\n    elif \"parquet\" in filename.suffix:\n        df.to_parquet(filename, index=False, **kwargs)\n    elif \"csv\" in filename.suffix or \"txt\" in filename.suffix:\n        df.to_csv(filename, index=False, date_format=\"%H:%M:%S\", **kwargs)\n    elif \"geojson\" in filename.suffix:\n        # required due to issues with list-like columns\n        if isinstance(df, gpd.GeoDataFrame):\n            data = df.to_json(drop_id=True)\n        else:\n            data = df.to_json(orient=\"records\", index=False)\n        with open(filename, \"w\", encoding=\"utf-8\") as file:\n            file.write(data)\n    elif \"json\" in filename.suffix:\n        with open(filename, \"w\") as f:\n            f.write(df.to_json(orient=\"records\"))\n    else:\n        raise NotImplementedError(f\"Filetype {filename.suffix} not implemented.\")\n
"},{"location":"api/#network_wrangler.utils.models.DatamodelDataframeIncompatableError","title":"DatamodelDataframeIncompatableError","text":"

Bases: Exception

Raised when a data model and a dataframe are not compatible.

Source code in network_wrangler/utils/models.py
class DatamodelDataframeIncompatableError(Exception):\n    \"\"\"Raised when a data model and a dataframe are not compatible.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.models.coerce_extra_fields_to_type_in_df","title":"coerce_extra_fields_to_type_in_df(data, model, df)","text":"

Coerce extra fields in data that aren\u2019t specified in Pydantic model to the type in the df.

Note: will not coerce lists of submodels, etc.

Parameters:

Name Type Description Default data dict

The data to coerce.

required model BaseModel

The Pydantic model to validate the data against.

required df DataFrame

The DataFrame to coerce the data to.

required Source code in network_wrangler/utils/models.py
def coerce_extra_fields_to_type_in_df(\n    data: BaseModel, model: BaseModel, df: pd.DataFrame\n) -> BaseModel:\n    \"\"\"Coerce extra fields in data that aren't specified in Pydantic model to the type in the df.\n\n    Note: will not coerce lists of submodels, etc.\n\n    Args:\n        data (dict): The data to coerce.\n        model (BaseModel): The Pydantic model to validate the data against.\n        df (pd.DataFrame): The DataFrame to coerce the data to.\n    \"\"\"\n    out_data = copy.deepcopy(data)\n\n    # Coerce submodels\n    for field in submodel_fields_in_model(model, data):\n        out_data.__dict__[field] = coerce_extra_fields_to_type_in_df(\n            data.__dict__[field], model.__annotations__[field], df\n        )\n\n    for field in extra_attributes_undefined_in_model(data, model):\n        try:\n            v = coerce_val_to_df_types(field, data.model_extra[field], df)\n        except ValueError as e:\n            raise DatamodelDataframeIncompatableError(e)\n        out_data.model_extra[field] = v\n    return out_data\n
"},{"location":"api/#network_wrangler.utils.models.default_from_datamodel","title":"default_from_datamodel(data_model, field)","text":"

Returns default value from pandera data model for a given field name.

Source code in network_wrangler/utils/models.py
def default_from_datamodel(data_model: pa.DataFrameModel, field: str):\n    \"\"\"Returns default value from pandera data model for a given field name.\"\"\"\n    if field in data_model.__fields__:\n        return data_model.__fields__[field][1].default\n    return None\n
"},{"location":"api/#network_wrangler.utils.models.empty_df_from_datamodel","title":"empty_df_from_datamodel(model, crs=LAT_LON_CRS)","text":"

Create an empty DataFrame or GeoDataFrame with the specified columns.

Parameters:

Name Type Description Default model BaseModel

A pandera data model to create empty [Geo]DataFrame from.

required crs int

if schema has geometry, will use this as the geometry\u2019s crs. Defaults to LAT_LON_CRS

LAT_LON_CRS Source code in network_wrangler/utils/models.py
def empty_df_from_datamodel(\n    model: DataFrameModel, crs: int = LAT_LON_CRS\n) -> Union[gpd.GeoDataFrame, pd.DataFrame]:\n    \"\"\"Create an empty DataFrame or GeoDataFrame with the specified columns.\n\n    Args:\n        model (BaseModel): A pandera data model to create empty [Geo]DataFrame from.\n        crs: if schema has geometry, will use this as the geometry's crs. Defaults to LAT_LON_CRS\n    Returns:\n        An empty [Geo]DataFrame that validates to the specified model.\n    \"\"\"\n    schema = model.to_schema()\n    data = {col: [] for col in schema.columns.keys()}\n\n    if \"geometry\" in data:\n        return model(gpd.GeoDataFrame(data, crs=crs))\n\n    return model(pd.DataFrame(data))\n
"},{"location":"api/#network_wrangler.utils.models.extra_attributes_undefined_in_model","title":"extra_attributes_undefined_in_model(instance, model)","text":"

Find the extra attributes in a pydantic model that are not defined in the model.

Source code in network_wrangler/utils/models.py
def extra_attributes_undefined_in_model(instance: BaseModel, model: BaseModel) -> list:\n    \"\"\"Find the extra attributes in a pydantic model that are not defined in the model.\"\"\"\n    defined_fields = model.model_fields\n    all_attributes = list(instance.model_dump(exclude_none=True, by_alias=True).keys())\n    extra_attributes = [a for a in all_attributes if a not in defined_fields]\n    return extra_attributes\n
"},{"location":"api/#network_wrangler.utils.models.identify_model","title":"identify_model(data, models)","text":"

Identify the model that the input data conforms to.

Parameters:

Name Type Description Default data Union[DataFrame, dict]

The input data to identify.

required models list[DataFrameModel, BaseModel]

A list of models to validate the input data against.

required Source code in network_wrangler/utils/models.py
def identify_model(\n    data: Union[pd.DataFrame, dict], models: list[DataFrameModel, BaseModel]\n) -> Union[DataFrameModel, BaseModel]:\n    \"\"\"Identify the model that the input data conforms to.\n\n    Args:\n        data (Union[pd.DataFrame, dict]): The input data to identify.\n        models (list[DataFrameModel,BaseModel]): A list of models to validate the input\n          data against.\n    \"\"\"\n    for m in models:\n        try:\n            if isinstance(data, pd.DataFrame):\n                m.validate(data)\n            else:\n                m(**data)\n            return m\n        except ValidationError:\n            continue\n        except SchemaError:\n            continue\n\n    WranglerLogger.error(\n        f\"The input data isn't consistent with any provided data model.\\\n                         \\nInput data: {data}\\\n                         \\nData Models: {models}\"\n    )\n    raise ValueError(\"The input dictionary does not conform to any of the provided models.\")\n
"},{"location":"api/#network_wrangler.utils.models.submodel_fields_in_model","title":"submodel_fields_in_model(model, instance=None)","text":"

Find the fields in a pydantic model that are submodels.

Source code in network_wrangler/utils/models.py
def submodel_fields_in_model(model: BaseModel, instance: Optional[BaseModel] = None) -> list:\n    \"\"\"Find the fields in a pydantic model that are submodels.\"\"\"\n    types = get_type_hints(model)\n    model_type = Union[ModelMetaclass, BaseModel]\n    submodels = [f for f in model.model_fields if isinstance(types.get(f), model_type)]\n    if instance is not None:\n        defined = list(instance.model_dump(exclude_none=True, by_alias=True).keys())\n        return [f for f in submodels if f in defined]\n    return submodels\n
"},{"location":"api/#network_wrangler.utils.net.point_seq_to_links","title":"point_seq_to_links(point_seq_df, id_field, seq_field, node_id_field, from_field='A', to_field='B')","text":"

Translates a df with tidy data representing a sequence of points into links.

Parameters:

Name Type Description Default point_seq_df DataFrame

Dataframe with source breadcrumbs

required id_field str

Trace ID

required seq_field str

Order of breadcrumbs within ID_field

required node_id_field str

field denoting the node ID

required from_field str

Field to export from_field to. Defaults to \u201cA\u201d.

'A' to_field str

Field to export to_field to. Defaults to \u201cB\u201d.

'B'

Returns:

Type Description DataFrame

pd.DataFrame: Link records with id_field, from_field, to_field

Source code in network_wrangler/utils/net.py
def point_seq_to_links(\n    point_seq_df: DataFrame,\n    id_field: str,\n    seq_field: str,\n    node_id_field: str,\n    from_field: str = \"A\",\n    to_field: str = \"B\",\n) -> DataFrame:\n    \"\"\"Translates a df with tidy data representing a sequence of points into links.\n\n    Args:\n        point_seq_df (pd.DataFrame): Dataframe with source breadcrumbs\n        id_field (str): Trace ID\n        seq_field (str): Order of breadcrumbs within ID_field\n        node_id_field (str): field denoting the node ID\n        from_field (str, optional): Field to export from_field to. Defaults to \"A\".\n        to_field (str, optional): Field to export to_field to. Defaults to \"B\".\n\n    Returns:\n        pd.DataFrame: Link records with id_field, from_field, to_field\n    \"\"\"\n    point_seq_df = point_seq_df.sort_values(by=[id_field, seq_field])\n\n    links = point_seq_df.add_suffix(f\"_{from_field}\").join(\n        point_seq_df.shift(-1).add_suffix(f\"_{to_field}\")\n    )\n\n    links = links[links[f\"{id_field}_{to_field}\"] == links[f\"{id_field}_{from_field}\"]]\n\n    links = links.drop(columns=[f\"{id_field}_{to_field}\"])\n    links = links.rename(\n        columns={\n            f\"{id_field}_{from_field}\": id_field,\n            f\"{node_id_field}_{from_field}\": from_field,\n            f\"{node_id_field}_{to_field}\": to_field,\n        }\n    )\n\n    links = links.dropna(subset=[from_field, to_field])\n    # Since join with a shift() has some NAs, we need to recast the columns to int\n    _int_cols = [to_field, f\"{seq_field}_{to_field}\"]\n    links[_int_cols] = links[_int_cols].astype(int)\n    return links\n
"},{"location":"api/#network_wrangler.utils.time.convert_timespan_to_start_end_dt","title":"convert_timespan_to_start_end_dt(timespan_s)","text":"

Convert a timespan string [\u201812:00\u2019,\u201814:00\u2019] to start_time and end_time datetime cols in df.

Source code in network_wrangler/utils/time.py
def convert_timespan_to_start_end_dt(timespan_s: pd.Series) -> pd.DataFrame:\n    \"\"\"Convert a timespan string ['12:00','14:00'] to start_time and end_time datetime cols in df.\"\"\"\n    start_time = timespan_s.apply(lambda x: str_to_time(x[0]))\n    end_time = timespan_s.apply(lambda x: str_to_time(x[1]))\n    return pd.DataFrame({\"start_time\": start_time, \"end_time\": end_time})\n
"},{"location":"api/#network_wrangler.utils.time.dt_contains","title":"dt_contains(timespan1, timespan2)","text":"

Check if one timespan inclusively contains another.

Parameters:

Name Type Description Default timespan1 list[time]

The first timespan represented as a list containing the start time and end time.

required timespan2 list[time]

The second timespan represented as a list containing the start time and end time.

required

Returns:

Name Type Description bool bool

True if the first timespan contains the second timespan, False otherwise.

Source code in network_wrangler/utils/time.py
@validate_call\ndef dt_contains(timespan1: list[datetime], timespan2: list[datetime]) -> bool:\n    \"\"\"Check if one timespan inclusively contains another.\n\n    Args:\n        timespan1 (list[time]): The first timespan represented as a list containing the start\n            time and end time.\n        timespan2 (list[time]): The second timespan represented as a list containing the start\n            time and end time.\n\n    Returns:\n        bool: True if the first timespan contains the second timespan, False otherwise.\n    \"\"\"\n    start_time_dt, end_time_dt = timespan1\n    start_time_dt2, end_time_dt2 = timespan2\n    return (start_time_dt <= start_time_dt2) and (end_time_dt >= end_time_dt2)\n
"},{"location":"api/#network_wrangler.utils.time.dt_list_overlaps","title":"dt_list_overlaps(timespans)","text":"

Check if any of the timespans overlap.

overlapping: a timespan that fully or partially overlaps a given timespan. This includes all timespans where at least one minute overlaps.

Source code in network_wrangler/utils/time.py
@validate_call\ndef dt_list_overlaps(timespans: list[list[datetime]]) -> bool:\n    \"\"\"Check if any of the timespans overlap.\n\n    `overlapping`: a timespan that fully or partially overlaps a given timespan.\n    This includes all timespans where at least one minute overlaps.\n    \"\"\"\n    if filter_dt_list_to_overlaps(timespans):\n        return True\n    return False\n
"},{"location":"api/#network_wrangler.utils.time.dt_overlap_duration","title":"dt_overlap_duration(timedelta1, timedelta2)","text":"

Check if two timespans overlap and return the amount of overlap.

Source code in network_wrangler/utils/time.py
@validate_call\ndef dt_overlap_duration(timedelta1: timedelta, timedelta2: timedelta) -> timedelta:\n    \"\"\"Check if two timespans overlap and return the amount of overlap.\"\"\"\n    overlap_start = max(timedelta1.start_time, timedelta2.start_time)\n    overlap_end = min(timedelta1.end_time, timedelta2.end_time)\n    overlap_duration = max(overlap_end - overlap_start, timedelta(0))\n    return overlap_duration\n
"},{"location":"api/#network_wrangler.utils.time.dt_overlaps","title":"dt_overlaps(timespan1, timespan2)","text":"

Check if two timespans overlap.

overlapping: a timespan that fully or partially overlaps a given timespan. This includes all timespans where at least one minute overlaps.

Source code in network_wrangler/utils/time.py
@validate_call\ndef dt_overlaps(timespan1: list[datetime], timespan2: list[datetime]) -> bool:\n    \"\"\"Check if two timespans overlap.\n\n    `overlapping`: a timespan that fully or partially overlaps a given timespan.\n    This includes all timespans where at least one minute overlaps.\n    \"\"\"\n    if (timespan1[0] < timespan2[1]) and (timespan2[0] < timespan1[1]):\n        return True\n    return False\n
"},{"location":"api/#network_wrangler.utils.time.duration_dt","title":"duration_dt(start_time_dt, end_time_dt)","text":"

Returns a datetime.timedelta object representing the duration of the timespan.

If end_time is less than start_time, the duration will assume that it crosses over midnight.

Source code in network_wrangler/utils/time.py
def duration_dt(start_time_dt: datetime, end_time_dt: datetime) -> timedelta:\n    \"\"\"Returns a datetime.timedelta object representing the duration of the timespan.\n\n    If end_time is less than start_time, the duration will assume that it crosses over\n    midnight.\n    \"\"\"\n    if end_time_dt < start_time_dt:\n        return timedelta(\n            hours=24 - start_time_dt.hour + end_time_dt.hour,\n            minutes=end_time_dt.minute - start_time_dt.minute,\n            seconds=end_time_dt.second - start_time_dt.second,\n        )\n    else:\n        return end_time_dt - start_time_dt\n
"},{"location":"api/#network_wrangler.utils.time.filter_df_to_overlapping_timespans","title":"filter_df_to_overlapping_timespans(orig_df, query_timespan, strict_match=False, min_overlap_minutes=0, keep_max_of_cols=['model_link_id'])","text":"

Filters dataframe for entries that have maximum overlap with the given query timespan.

Parameters:

Name Type Description Default orig_df DataFrame

dataframe to query timespans for with start_time and end_time.

required query_timespan list[TimeString]

TimespanString of format [\u2018HH:MM\u2019,\u2018HH:MM\u2019] to query orig_df for overlapping records.

required strict_match bool

boolean indicating if the returned df should only contain records that fully contain the query timespan. If set to True, min_overlap_minutes does not apply. Defaults to False.

False min_overlap_minutes int

minimum number of minutes the timespans need to overlap to keep. Defaults to 0.

0 keep_max_of_cols list[str]

list of fields to return the maximum value of overlap for. If None, will return all overlapping time periods. Defaults to ['model_link_id']

['model_link_id'] Source code in network_wrangler/utils/time.py
def filter_df_to_overlapping_timespans(\n    orig_df: pd.DataFrame,\n    query_timespan: list[TimeString],\n    strict_match: bool = False,\n    min_overlap_minutes: int = 0,\n    keep_max_of_cols: list[str] = [\"model_link_id\"],\n) -> pd.DataFrame:\n    \"\"\"Filters dataframe for entries that have maximum overlap with the given query timespan.\n\n    Args:\n        orig_df: dataframe to query timespans for with `start_time` and `end_time`.\n        query_timespan: TimespanString of format ['HH:MM','HH:MM'] to query orig_df for overlapping\n            records.\n        strict_match: boolean indicating if the returned df should only contain\n            records that fully contain the query timespan. If set to True, min_overlap_minutes\n            does not apply. Defaults to False.\n        min_overlap_minutes: minimum number of minutes the timespans need to overlap to keep.\n            Defaults to 0.\n        keep_max_of_cols: list of fields to return the maximum value of overlap for.  If None,\n            will return all overlapping time periods. Defaults to `['model_link_id']`\n    \"\"\"\n    q_start, q_end = str_to_time_list(query_timespan)\n\n    overlap_start = orig_df[\"start_time\"].combine(q_start, max)\n    overlap_end = orig_df[\"end_time\"].combine(q_end, min)\n    orig_df[\"overlap_duration\"] = (overlap_end - overlap_start).dt.total_seconds() / 60\n\n    if strict_match:\n        overlap_df = orig_df.loc[(orig_df.start_time <= q_start) & (orig_df.end_time >= q_end)]\n    else:\n        overlap_df = orig_df.loc[orig_df.overlap_duration > min_overlap_minutes]\n    WranglerLogger.debug(f\"overlap_df: \\n{overlap_df}\")\n    if keep_max_of_cols:\n        # keep only the maximum overlap\n        idx = overlap_df.groupby(keep_max_of_cols)[\"overlap_duration\"].idxmax()\n        overlap_df = overlap_df.loc[idx]\n    return overlap_df\n
"},{"location":"api/#network_wrangler.utils.time.filter_dt_list_to_overlaps","title":"filter_dt_list_to_overlaps(timespans)","text":"

Filter a list of timespans to only include those that overlap.

overlapping: a timespan that fully or partially overlaps a given timespan. This includes all timespans where at least one minute overlaps.

Source code in network_wrangler/utils/time.py
@validate_call\ndef filter_dt_list_to_overlaps(timespans: list[list[datetime]]) -> list[list[datetime]]:\n    \"\"\"Filter a list of timespans to only include those that overlap.\n\n    `overlapping`: a timespan that fully or partially overlaps a given timespan.\n    This includes all timespans where at least one minute overlaps.\n    \"\"\"\n    overlaps = []\n    for i in range(len(timespans)):\n        for j in range(i + 1, len(timespans)):\n            if dt_overlaps(timespans[i], timespans[j]):\n                overlaps += [timespans[i], timespans[j]]\n\n    # remove dupes\n    overlaps = list(map(list, set(map(tuple, overlaps))))\n    return overlaps\n
"},{"location":"api/#network_wrangler.utils.time.format_time","title":"format_time(seconds)","text":"

Formats seconds into a human-friendly string for log files.

Source code in network_wrangler/utils/time.py
def format_time(seconds):\n    \"\"\"Formats seconds into a human-friendly string for log files.\"\"\"\n    if seconds < 60:\n        return f\"{int(seconds)} seconds\"\n    elif seconds < 3600:\n        return f\"{int(seconds // 60)} minutes\"\n    else:\n        hours = int(seconds // 3600)\n        minutes = int((seconds % 3600) // 60)\n        return f\"{hours} hours and {minutes} minutes\"\n
"},{"location":"api/#network_wrangler.utils.time.str_to_time","title":"str_to_time(time_str)","text":"

Convert TimeString (HH:MM<:SS>) to datetime.time object.

Source code in network_wrangler/utils/time.py
def str_to_time(time_str: TimeString) -> datetime:\n    \"\"\"Convert TimeString (HH:MM<:SS>) to datetime.time object.\"\"\"\n    n_days = 0\n    # Convert to the next day\n    hours, min_sec = time_str.split(\":\", 1)\n    if int(hours) >= 24:\n        n_days, hour_of_day = divmod(int(hours), 24)\n        time_str = f\"{hour_of_day}:{min_sec}\"  # noqa E231\n\n    if len(time_str.split(\":\")) == 2:\n        base_time = datetime.strptime(time_str, \"%H:%M\")\n    elif len(time_str.split(\":\")) == 3:\n        base_time = datetime.strptime(time_str, \"%H:%M:%S\")\n    else:\n        from ..time import TimeFormatError\n\n        raise TimeFormatError(\"time strings must be in the format HH:MM or HH:MM:SS\")\n\n    total_time = base_time\n    if n_days > 0:\n        total_time = base_time + timedelta(days=n_days)\n    return total_time\n
"},{"location":"api/#network_wrangler.utils.time.str_to_time_list","title":"str_to_time_list(timespan)","text":"

Convert list of TimeStrings (HH:MM<:SS>) to list of datetime.time objects.

Source code in network_wrangler/utils/time.py
def str_to_time_list(timespan: list[TimeString]) -> list[list[datetime]]:\n    \"\"\"Convert list of TimeStrings (HH:MM<:SS>) to list of datetime.time objects.\"\"\"\n    return list(map(str_to_time, timespan))\n
"},{"location":"api/#network_wrangler.utils.time.timespan_str_list_to_dt","title":"timespan_str_list_to_dt(timespans)","text":"

Convert list of TimespanStrings to list of datetime.time objects.

Source code in network_wrangler/utils/time.py
def timespan_str_list_to_dt(timespans: list[TimespanString]) -> list[list[datetime]]:\n    \"\"\"Convert list of TimespanStrings to list of datetime.time objects.\"\"\"\n    return [str_to_time_list(ts) for ts in timespans]\n
"},{"location":"api/#network_wrangler.utils.data.InvalidJoinFieldError","title":"InvalidJoinFieldError","text":"

Bases: Exception

Raised when the join field is not unique.

Source code in network_wrangler/utils/data.py
class InvalidJoinFieldError(Exception):\n    \"\"\"Raised when the join field is not unique.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.data.MissingPropertiesError","title":"MissingPropertiesError","text":"

Bases: Exception

Raised when properties are missing from the dataframe.

Source code in network_wrangler/utils/data.py
class MissingPropertiesError(Exception):\n    \"\"\"Raised when properties are missing from the dataframe.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.data.attach_parameters_to_df","title":"attach_parameters_to_df(df, params)","text":"

Attach params as a dataframe attribute which will be copied with the dataframe.

Source code in network_wrangler/utils/data.py
def attach_parameters_to_df(df: pd.DataFrame, params) -> pd.DataFrame:\n    \"\"\"Attach params as a dataframe attribute which will be copied with the dataframe.\"\"\"\n    if not df.__dict__.get(\"params\"):\n        df.__dict__[\"params\"] = params\n        # need to add params to _metadata in order to make sure it is copied.\n        # see: https://stackoverflow.com/questions/50372509/\n        df._metadata += [\"params\"]\n    # WranglerLogger.debug(f\"DFParams: {df.params}\")\n    return df\n
"},{"location":"api/#network_wrangler.utils.data.coerce_dict_to_df_types","title":"coerce_dict_to_df_types(d, df, skip_keys=[], return_skipped=False)","text":"

Coerce dictionary values to match the type of a dataframe columns matching dict keys.

Will also coerce a list of values.

Parameters:

Name Type Description Default d dict

dictionary to coerce with singleton or list values

required df DataFrame

dataframe to get types from

required skip_keys list

list of dict keys to skip. Defaults to [].

[] return_skipped bool

keep the uncoerced, skipped keys/vals in the resulting dict. Defaults to False.

False

Returns:

Name Type Description dict dict

dict with coerced types

Source code in network_wrangler/utils/data.py
def coerce_dict_to_df_types(\n    d: dict, df: pd.DataFrame, skip_keys: list = [], return_skipped: bool = False\n) -> dict:\n    \"\"\"Coerce dictionary values to match the type of a dataframe columns matching dict keys.\n\n    Will also coerce a list of values.\n\n    Args:\n        d (dict): dictionary to coerce with singleton or list values\n        df (pd.DataFrame): dataframe to get types from\n        skip_keys: list of dict keys to skip. Defaults to []/\n        return_skipped: keep the uncoerced, skipped keys/vals in the resulting dict.\n            Defaults to False.\n\n    Returns:\n        dict: dict with coerced types\n    \"\"\"\n    coerced_dict = {}\n    for k, vals in d.items():\n        if k in skip_keys:\n            if return_skipped:\n                coerced_dict[k] = vals\n            continue\n        if k not in df.columns:\n            raise ValueError(f\"Key {k} not in dataframe columns.\")\n        if pd.api.types.infer_dtype(df[k]) == \"integer\":\n            if isinstance(vals, list):\n                coerced_v = [int(float(v)) for v in vals]\n            else:\n                coerced_v = int(float(vals))\n        elif pd.api.types.infer_dtype(df[k]) == \"floating\":\n            if isinstance(vals, list):\n                coerced_v = [float(v) for v in vals]\n            else:\n                coerced_v = float(vals)\n        elif pd.api.types.infer_dtype(df[k]) == \"boolean\":\n            if isinstance(vals, list):\n                coerced_v = [bool(v) for v in vals]\n            else:\n                coerced_v = bool(vals)\n        else:\n            if isinstance(vals, list):\n                coerced_v = [str(v) for v in vals]\n            else:\n                coerced_v = str(vals)\n        coerced_dict[k] = coerced_v\n    return coerced_dict\n
"},{"location":"api/#network_wrangler.utils.data.coerce_gdf","title":"coerce_gdf(df, geometry=None, in_crs=LAT_LON_CRS)","text":"

Coerce a DataFrame to a GeoDataFrame, optionally with a new geometry.

Source code in network_wrangler/utils/data.py
def coerce_gdf(\n    df: pd.DataFrame, geometry: GeoSeries = None, in_crs: int = LAT_LON_CRS\n) -> GeoDataFrame:\n    \"\"\"Coerce a DataFrame to a GeoDataFrame, optionally with a new geometry.\"\"\"\n    if isinstance(df, GeoDataFrame):\n        if df.crs is None:\n            df.crs = in_crs\n        return df\n    p = None\n    if \"params\" in df.__dict__:\n        p = copy.deepcopy(df.params)\n\n    if \"geometry\" not in df and geometry is None:\n        raise ValueError(\"Must give geometry argument if don't have Geometry in dataframe\")\n\n    geometry = geometry if geometry is not None else df[\"geometry\"]\n    if not isinstance(geometry, GeoSeries):\n        try:\n            geometry = GeoSeries(geometry)\n        except:  # noqa: E722\n            geometry = geometry.apply(wkt.loads)\n    df = GeoDataFrame(df, geometry=geometry, crs=in_crs)\n\n    if p is not None:\n        # GeoPandas seems to lose parameters if we don't re-attach them.\n        df.__dict__[\"params\"] = p\n    return df\n
"},{"location":"api/#network_wrangler.utils.data.coerce_val_to_df_types","title":"coerce_val_to_df_types(field, val, df)","text":"

Coerce field value to match the type of a matching dataframe columns.

Parameters:

Name Type Description Default field str

field to lookup

required val Union[str, int, float, bool, list[Union[str, int, float, bool]]]

value or list of values to coerce

required df DataFrame

dataframe to get types from

required Source code in network_wrangler/utils/data.py
def coerce_val_to_df_types(\n    field: str,\n    val: Union[str, int, float, bool, list[Union[str, int, float, bool]]],\n    df: pd.DataFrame,\n) -> dict:\n    \"\"\"Coerce field value to match the type of a matching dataframe columns.\n\n    Args:\n        field: field to lookup\n        val: value or list of values to coerce\n        df (pd.DataFrame): dataframe to get types from\n\n    Returns: coerced value or list of values\n    \"\"\"\n    if field not in df.columns:\n        raise ValueError(f\"Field {field} not in dataframe columns.\")\n    if pd.api.types.infer_dtype(df[field]) == \"integer\":\n        if isinstance(val, list):\n            return [int(float(v)) for v in val]\n        return int(float(val))\n    elif pd.api.types.infer_dtype(df[field]) == \"floating\":\n        if isinstance(val, list):\n            return [float(v) for v in val]\n        return float(val)\n    elif pd.api.types.infer_dtype(df[field]) == \"boolean\":\n        if isinstance(val, list):\n            return [bool(v) for v in val]\n        return bool(val)\n    else:\n        if isinstance(val, list):\n            return [str(v) for v in val]\n        return str(val)\n
"},{"location":"api/#network_wrangler.utils.data.coerce_val_to_series_type","title":"coerce_val_to_series_type(val, s)","text":"

Coerces a value to match type of pandas series.

Will try not to fail, so if you give it a value that can\u2019t convert to a number, it will return a string.

Parameters:

Name Type Description Default val

Any type of singleton value

required s Series

series to match the type to

required Source code in network_wrangler/utils/data.py
def coerce_val_to_series_type(val, s: pd.Series):\n    \"\"\"Coerces a value to match type of pandas series.\n\n    Will try not to fail so if you give it a value that can't convert to a number, it will\n    return a string.\n\n    Args:\n        val: Any type of singleton value\n        s (pd.Series): series to match the type to\n    \"\"\"\n    # WranglerLogger.debug(f\"Input val: {val} of type {type(val)} to match with series type \\\n    #    {pd.api.types.infer_dtype(s)}.\")\n    if pd.api.types.infer_dtype(s) in [\"integer\", \"floating\"]:\n        try:\n            v = float(val)\n        except:  # noqa: E722\n            v = str(val)\n    elif pd.api.types.infer_dtype(s) == \"boolean\":\n        v = bool(val)\n    else:\n        v = str(val)\n    # WranglerLogger.debug(f\"Return value: {v}\")\n    return v\n
"},{"location":"api/#network_wrangler.utils.data.compare_df_values","title":"compare_df_values(df1, df2, join_col=None, ignore=[], atol=1e-05)","text":"

Compare overlapping part of dataframes and returns where there are differences.

Source code in network_wrangler/utils/data.py
def compare_df_values(df1, df2, join_col: str = None, ignore: list[str] = [], atol=1e-5):\n    \"\"\"Compare overlapping part of dataframes and returns where there are differences.\"\"\"\n    comp_c = [\n        c\n        for c in df1.columns\n        if c in df2.columns and c not in ignore and not isinstance(df1[c], GeoSeries)\n    ]\n    if join_col is None:\n        comp_df = df1[comp_c].merge(\n            df2[comp_c],\n            how=\"inner\",\n            right_index=True,\n            left_index=True,\n            suffixes=[\"_a\", \"_b\"],\n        )\n    else:\n        comp_df = df1[comp_c].merge(df2[comp_c], how=\"inner\", on=join_col, suffixes=[\"_a\", \"_b\"])\n\n    # Filter columns by data type\n    numeric_cols = [col for col in comp_c if np.issubdtype(df1[col].dtype, np.number)]\n    ll_cols = list_like_columns(df1)\n    other_cols = [col for col in comp_c if col not in numeric_cols and col not in ll_cols]\n\n    # For numeric columns, use np.isclose\n    if numeric_cols:\n        numeric_a = comp_df[[f\"{col}_a\" for col in numeric_cols]]\n        numeric_b = comp_df[[f\"{col}_b\" for col in numeric_cols]]\n        is_close = np.isclose(numeric_a, numeric_b, atol=atol, equal_nan=True)\n        comp_df[numeric_cols] = ~is_close\n\n    if ll_cols:\n        for ll_c in ll_cols:\n            comp_df[ll_c] = diff_list_like_series(comp_df[ll_c + \"_a\"], comp_df[ll_c + \"_b\"])\n\n    # For non-numeric columns, use direct comparison\n    if other_cols:\n        for col in other_cols:\n            comp_df[col] = (comp_df[f\"{col}_a\"] != comp_df[f\"{col}_b\"]) & ~(\n                comp_df[f\"{col}_a\"].isna() & comp_df[f\"{col}_b\"].isna()\n            )\n\n    # Filter columns and rows where no differences\n    cols_w_diffs = [col for col in comp_c if comp_df[col].any()]\n    out_cols = [col for subcol in cols_w_diffs for col in (f\"{subcol}_a\", f\"{subcol}_b\", subcol)]\n    comp_df = comp_df[out_cols]\n    comp_df = comp_df.loc[comp_df[cols_w_diffs].any(axis=1)]\n\n    return comp_df\n
"},{"location":"api/#network_wrangler.utils.data.dict_fields_in_df","title":"dict_fields_in_df(d, df)","text":"

Check if all fields in dict are in dataframe.

Source code in network_wrangler/utils/data.py
def dict_fields_in_df(d: dict, df: pd.DataFrame) -> bool:\n    \"\"\"Check if all fields in dict are in dataframe.\"\"\"\n    missing_fields = [f for f in d.keys() if f not in df.columns]\n    if missing_fields:\n        WranglerLogger.error(f\"Fields in dictionary missing from dataframe: {missing_fields}.\")\n        raise ValueError(f\"Fields in dictionary missing from dataframe: {missing_fields}.\")\n    return True\n
"},{"location":"api/#network_wrangler.utils.data.dict_to_query","title":"dict_to_query(selection_dict)","text":"

Generates the query from selection_dict.

Parameters:

Name Type Description Default selection_dict Mapping[str, Any]

selection dictionary

required

Returns:

Name Type Description _type_ str

Query value

Source code in network_wrangler/utils/data.py
def dict_to_query(\n    selection_dict: Mapping[str, Any],\n) -> str:\n    \"\"\"Generates the query from selection_dict.\n\n    Args:\n        selection_dict: selection dictionary\n\n    Returns:\n        _type_: Query value\n    \"\"\"\n    WranglerLogger.debug(\"Building selection query\")\n\n    def _kv_to_query_part(k, v, _q_part=\"\"):\n        if isinstance(v, list):\n            _q_part += \"(\" + \" or \".join([_kv_to_query_part(k, i) for i in v]) + \")\"\n            return _q_part\n        if isinstance(v, str):\n            return k + '.str.contains(\"' + v + '\")'\n        else:\n            return k + \"==\" + str(v)\n\n    query = \"(\" + \" and \".join([_kv_to_query_part(k, v) for k, v in selection_dict.items()]) + \")\"\n    WranglerLogger.debug(f\"Selection query: \\n{query}\")\n    return query\n
"},{"location":"api/#network_wrangler.utils.data.diff_dfs","title":"diff_dfs(df1, df2, ignore=[])","text":"

Compare two dataframes and log differences.

Source code in network_wrangler/utils/data.py
def diff_dfs(df1, df2, ignore: list[str] = []) -> bool:\n    \"\"\"Compare two dataframes and log differences.\"\"\"\n    diff = False\n    if set(df1.columns) != set(df2.columns):\n        WranglerLogger.warning(\n            f\" Columns are different 1vs2 \\n    {set(df1.columns) ^ set(df2.columns)}\"\n        )\n        common_cols = [col for col in df1.columns if col in df2.columns]\n        df1 = df1[common_cols]\n        df2 = df2[common_cols]\n        diff = True\n\n    cols_to_compare = [col for col in df1.columns if col not in ignore]\n    df1 = df1[cols_to_compare]\n    df2 = df2[cols_to_compare]\n\n    if len(df1) != len(df2):\n        WranglerLogger.warning(\n            f\" Length is different /\" f\"DF1: {len(df1)} vs /\" f\"DF2: {len(df2)}\\n /\"\n        )\n        diff = True\n\n    diff_df = compare_df_values(df1, df2)\n\n    if not diff_df.empty:\n        WranglerLogger.error(f\"!!! Differences dfs: \\n{diff_df}\")\n        return True\n\n    if not diff:\n        WranglerLogger.info(\"...no differences in df found.\")\n    return diff\n
"},{"location":"api/#network_wrangler.utils.data.diff_list_like_series","title":"diff_list_like_series(s1, s2)","text":"

Compare two series that contain list-like items as strings.

Source code in network_wrangler/utils/data.py
def diff_list_like_series(s1, s2) -> bool:\n    \"\"\"Compare two series that contain list-like items as strings.\"\"\"\n    diff_df = pd.concat([s1, s2], axis=1, keys=[\"s1\", \"s2\"])\n    diff_df[\"diff\"] = diff_df.apply(lambda x: str(x[\"s1\"]) != str(x[\"s2\"]), axis=1)\n\n    if diff_df[\"diff\"].any():\n        WranglerLogger.info(\"List-Like differences:\")\n        WranglerLogger.info(diff_df)\n        return True\n    return False\n
"},{"location":"api/#network_wrangler.utils.data.fk_in_pk","title":"fk_in_pk(pk, fk, ignore_nan=True)","text":"

Check if all foreign keys are in the primary keys, optionally ignoring NaN.

Source code in network_wrangler/utils/data.py
def fk_in_pk(\n    pk: Union[pd.Series, list], fk: Union[pd.Series, list], ignore_nan: bool = True\n) -> Tuple[bool, list]:\n    \"\"\"Check if all foreign keys are in the primary keys, optionally ignoring NaN.\"\"\"\n    if isinstance(fk, list):\n        fk = pd.Series(fk)\n\n    if ignore_nan:\n        fk = fk.dropna()\n\n    missing_flag = ~fk.isin(pk)\n\n    if missing_flag.any():\n        WranglerLogger.warning(\n            f\"Following keys referenced in {fk.name} but missing in\\\n            primary key table: \\n{fk[missing_flag]} \"\n        )\n        return False, fk[missing_flag].tolist()\n\n    return True, []\n
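The foreign-key check reduces to `Series.isin` after dropping NaNs; a standalone sketch of that logic (not the library function itself):

```python
import pandas as pd

pk = pd.Series([1, 2, 3, 4], name="model_node_id")
fk = pd.Series([2, 3, 99, None], name="A")  # None becomes NaN in a numeric series

fk_checked = fk.dropna()                # the ignore_nan=True behavior
missing = fk_checked[~fk_checked.isin(pk)]
valid = missing.empty                   # False: 99 dangles
```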
"},{"location":"api/#network_wrangler.utils.data.list_like_columns","title":"list_like_columns(df, item_type=None)","text":"

Find columns in a dataframe that contain list-like items that can\u2019t be json-serialized.

Parameters:

Name Type Description Default df

dataframe to check

required item_type type

if not None, will only return columns where all items are of this type by checking only the first item in the column. Defaults to None.

None Source code in network_wrangler/utils/data.py
def list_like_columns(df, item_type: type = None) -> list[str]:\n    \"\"\"Find columns in a dataframe that contain list-like items that can't be json-serialized.\n\n    Args:\n        df: dataframe to check\n        item_type: if not None, will only return columns where all items are of this type by\n            checking **only** the first item in the column.  Defaults to None.\n    \"\"\"\n    list_like_columns = []\n\n    for column in df.columns:\n        if df[column].apply(lambda x: isinstance(x, (list, ndarray))).any():\n            if item_type is not None:\n                if not isinstance(df[column].iloc[0], item_type):\n                    continue\n            list_like_columns.append(column)\n    return list_like_columns\n
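The detection boils down to an `apply`/`isinstance` scan per column; a standalone illustration of the `item_type=None` case:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "scalar": [1, 2],
    "lists": [[1, 2], [3]],
    "arrays": [np.array([1]), np.array([2])],
})

# Columns where any cell holds a list or ndarray (these break json-serialization).
llc = [
    col for col in df.columns
    if df[col].apply(lambda x: isinstance(x, (list, np.ndarray))).any()
]
```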
"},{"location":"api/#network_wrangler.utils.data.segment_data_by_selection","title":"segment_data_by_selection(item_list, data, field=None, end_val=0)","text":"

Segment a dataframe or series into before, middle, and end segments based on item_list.

selected segment = everything from the first to last item in item_list, inclusive of the first and last items. Before segment = everything before. After segment = everything after.

Parameters:

Name Type Description Default item_list list

List of items to segment data by. If longer than two, will only use the first and last items.

required data Union[Series, DataFrame]

Data to segment into before, middle, and after.

required field str

If a dataframe, specifies which field to reference. Defaults to None.

None end_val int

Sentinel value meaning until the end or from the beginning. Defaults to 0.

0

Raises:

Type Description ValueError

If item list isn\u2019t found in data in correct order.

Returns:

Name Type Description tuple tuple[Union[Series, list, DataFrame]]

data broken out by before, selected segment, and after.

Source code in network_wrangler/utils/data.py
def segment_data_by_selection(\n    item_list: list,\n    data: Union[list, pd.DataFrame, pd.Series],\n    field: str = None,\n    end_val=0,\n) -> tuple[Union[pd.Series, list, pd.DataFrame]]:\n    \"\"\"Segment a dataframe or series into before, middle, and end segments based on item_list.\n\n    selected segment = everything from the first to last item in item_list inclusive of the first\n        and last items.\n    Before segment = everything before\n    After segment = everything after\n\n\n    Args:\n        item_list (list): List of items to segment data by. If longer than two, will only\n            use the first and last items.\n        data (Union[pd.Series, pd.DataFrame]): Data to segment into before, middle, and after.\n        field (str, optional): If a dataframe, specifies which field to reference.\n            Defaults to None.\n        end_val (int, optional): Sentinel value meaning until the end or from the beginning.\n            Defaults to 0.\n\n    Raises:\n        ValueError: If item list isn't found in data in correct order.\n\n    Returns:\n        tuple: data broken out by before, selected segment, and after.\n    \"\"\"\n    ref_data = data\n    if isinstance(data, pd.DataFrame):\n        ref_data = data[field].tolist()\n    elif isinstance(data, pd.Series):\n        ref_data = data.tolist()\n\n    # ------- Replace \"to the end\" indicators with first or last value --------\n    start_item, end_item = item_list[0], item_list[-1]\n    if start_item == end_val:\n        start_item = ref_data[0]\n    if end_item == end_val:\n        end_item = ref_data[-1]\n\n    # --------Find the start and end indices -----------------------------------\n    start_idxs = list(set([i for i, item in enumerate(ref_data) if item == start_item]))\n    if not start_idxs:\n        raise ValueError(f\"Segment start item: {start_item} not in data.\")\n    if len(start_idxs) > 1:\n        WranglerLogger.warning(\n            f\"Found multiple starting locations for data segment: {start_item}.\\\n                                Choosing first \u2013 largest segment being selected.\"\n        )\n    start_idx = min(start_idxs)\n\n    # find the end node starting from the start index.\n    end_idxs = [i + start_idx for i, item in enumerate(ref_data[start_idx:]) if item == end_item]\n    # WranglerLogger.debug(f\"End indexes: {end_idxs}\")\n    if not end_idxs:\n        raise ValueError(f\"Segment end item: {end_item} not in data after starting idx.\")\n    if len(end_idxs) > 1:\n        WranglerLogger.warning(\n            f\"Found multiple ending locations for data segment: {end_item}.\\\n                                Choosing last \u2013 largest segment being selected.\"\n        )\n    end_idx = max(end_idxs) + 1\n    # WranglerLogger.debug(\n    # f\"Segmenting data fr {start_item} idx:{start_idx} to {end_item} idx:{end_idx}.\\n{ref_data}\")\n    # -------- Extract the segments --------------------------------------------\n    if isinstance(data, pd.DataFrame):\n        before_segment = data.iloc[:start_idx]\n        selected_segment = data.iloc[start_idx:end_idx]\n        after_segment = data.iloc[end_idx:]\n    else:\n        before_segment = data[:start_idx]\n        selected_segment = data[start_idx:end_idx]\n        after_segment = data[end_idx:]\n\n    if isinstance(data, pd.Series) or isinstance(data, pd.DataFrame):\n        before_segment = before_segment.reset_index(drop=True)\n        selected_segment = selected_segment.reset_index(drop=True)\n        after_segment = after_segment.reset_index(drop=True)\n\n    # WranglerLogger.debug(f\"Segmented data into before, selected, and after.\\n \\\n    #    Before:\\n{before_segment}\\nSelected:\\n{selected_segment}\\nAfter:\\n{after_segment}\")\n\n    return before_segment, selected_segment, after_segment\n
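The first/last-occurrence index arithmetic can be illustrated on a plain list; `segment_list` below is a simplified sketch, not the library function:

```python
def segment_list(item_list, data):
    start_item, end_item = item_list[0], item_list[-1]
    # First occurrence of the start item...
    start_idx = min(i for i, item in enumerate(data) if item == start_item)
    # ...and the last occurrence of the end item at or after it (inclusive slice end).
    end_idx = max(
        i + start_idx for i, item in enumerate(data[start_idx:]) if item == end_item
    ) + 1
    return data[:start_idx], data[start_idx:end_idx], data[end_idx:]

before, selected, after = segment_list([2, 5], [1, 2, 3, 4, 5, 6])
# before == [1]; selected == [2, 3, 4, 5]; after == [6]
```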
"},{"location":"api/#network_wrangler.utils.data.segment_data_by_selection_min_overlap","title":"segment_data_by_selection_min_overlap(selection_list, data, field, replacements_list, end_val=0)","text":"

Segments data based on item_list reducing overlap with replacement list.

selected segment: everything from the first to last item in item_list inclusive of the first and last items but not if first and last items overlap with replacement list. Before segment = everything before After segment = everything after

Example: selection_list = [2,5]; data = pd.DataFrame({\u201ci\u201d:[1,2,3,4,5,6]}); field = \u201ci\u201d; replacements_list = [2,22,33]

Returns:

Type Description list

[22,33]

tuple[Union[Series, list, DataFrame]]

[1], [2,3,4,5], [6]

Parameters:

Name Type Description Default selection_list list

List of items to segment data by. If longer than two, will only use the first and last items.

required data Union[Series, DataFrame]

Data to segment into before, middle, and after.

required field str

Specifies which field to reference.

required replacements_list list

List of items to eventually replace the selected segment with.

required end_val int

Sentinel value meaning until the end or from the beginning. Defaults to 0.

0

tuple containing:

Type Description list tuple[Union[Series, list, DataFrame]] Source code in network_wrangler/utils/data.py
def segment_data_by_selection_min_overlap(\n    selection_list: list,\n    data: pd.DataFrame,\n    field: str,\n    replacements_list: list,\n    end_val=0,\n) -> tuple[list, tuple[Union[pd.Series, list, pd.DataFrame]]]:\n    \"\"\"Segments data based on item_list reducing overlap with replacement list.\n\n    *selected segment*: everything from the first to last item in item_list inclusive of the first\n        and last items but not if first and last items overlap with replacement list.\n    Before segment = everything before\n    After segment = everything after\n\n    Example:\n    selection_list = [2,5]\n    data = pd.DataFrame({\"i\":[1,2,3,4,5,6]})\n    field = \"i\"\n    replacements_list = [2,22,33]\n\n    returns:\n        [22,33]\n        [1], [2,3,4,5], [6]\n\n    Args:\n        selection_list (list): List of items to segment data by. If longer than two, will only\n            use the first and last items.\n        data (Union[pd.Series, pd.DataFrame]): Data to segment into before, middle, and after.\n        field (str): Specifies which field to reference.\n        replacements_list (list): List of items to eventually replace the selected segment with.\n        end_val (int, optional): Sentinel value meaning until the end or from the beginning.\n            Defaults to 0.\n\n    Returns: tuple containing:\n        - updated replacement_list\n        - tuple of before, selected segment, and after data\n    \"\"\"\n    before_segment, segment_df, after_segment = segment_data_by_selection(\n        selection_list, data, field=field, end_val=end_val\n    )\n\n    if replacements_list[0] == segment_df[field].iat[0]:\n        # move first item from selected segment to the before_segment df\n        replacements_list = replacements_list[1:]\n        before_segment = pd.concat(\n            [before_segment, segment_df.iloc[:1]], ignore_index=True, sort=False\n        )\n        segment_df = segment_df.iloc[1:]\n        WranglerLogger.debug(f\"item start overlaps with replacement. Repl: {replacements_list}\")\n    if replacements_list and replacements_list[-1] == data[field].iat[-1]:\n        # move last item from selected segment to the after_segment df\n        replacements_list = replacements_list[:-1]\n        after_segment = pd.concat([data.iloc[-1:], after_segment], ignore_index=True, sort=False)\n        segment_df = segment_df.iloc[:-1]\n        WranglerLogger.debug(f\"item end overlaps with replacement. Repl: {replacements_list}\")\n\n    return replacements_list, (before_segment, segment_df, after_segment)\n
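The overlap-trimming step can be sketched on plain lists; `trim_overlap` below is a hypothetical helper for illustration, not part of the library:

```python
def trim_overlap(selected, replacements):
    # Move shared first/last items out of the selected segment so they are
    # kept in place rather than replaced.
    moved_before, moved_after = [], []
    if replacements and replacements[0] == selected[0]:
        replacements = replacements[1:]
        moved_before = [selected[0]]
        selected = selected[1:]
    if replacements and replacements[-1] == selected[-1]:
        replacements = replacements[:-1]
        moved_after = [selected[-1]]
        selected = selected[:-1]
    return replacements, moved_before, selected, moved_after

repl, mb, sel, ma = trim_overlap([2, 3, 4, 5], [2, 22, 33])
# repl == [22, 33]; the shared item 2 moves to the 'before' side; sel == [3, 4, 5]
```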
"},{"location":"api/#network_wrangler.utils.data.update_df_by_col_value","title":"update_df_by_col_value(destination_df, source_df, join_col, properties=None, fail_if_missing=True)","text":"

Updates destination_df with ALL values in source_df for specified props with same join_col.

Source_df can contain a subset of the IDs in destination_df. If fail_if_missing is true, destination_df must contain all the IDs in source_df, ensuring all source_df values are contained in the resulting df.

>> destination_df\ntrip_id  property1  property2\n1         10      100\n2         20      200\n3         30      300\n4         40      400\n\n>> source_df\ntrip_id  property1  property2\n2         25      250\n3         35      350\n\n>> updated_df\ntrip_id  property1  property2\n0        1       10      100\n1        2       25      250\n2        3       35      350\n3        4       40      400\n

Parameters:

Name Type Description Default destination_df DataFrame

Dataframe to modify.

required source_df DataFrame

Dataframe with updated columns

required join_col str

column to join on

required properties list[str]

List of properties to use. If None, will default to all in source_df.

None fail_if_missing bool

If True, will raise an error if there are missing IDs in destination_df that exist in source_df.

True Source code in network_wrangler/utils/data.py
def update_df_by_col_value(\n    destination_df: pd.DataFrame,\n    source_df: pd.DataFrame,\n    join_col: str,\n    properties: list[str] = None,\n    fail_if_missing: bool = True,\n) -> pd.DataFrame:\n    \"\"\"Updates destination_df with ALL values in source_df for specified props with same join_col.\n\n    Source_df can contain a subset of IDs of destination_df.\n    If fail_if_missing is true, destination_df must have all\n    the IDs in source_df, ensuring all source_df values are contained in the resulting df.\n\n    ```\n    >> destination_df\n    trip_id  property1  property2\n    1         10      100\n    2         20      200\n    3         30      300\n    4         40      400\n\n    >> source_df\n    trip_id  property1  property2\n    2         25      250\n    3         35      350\n\n    >> updated_df\n    trip_id  property1  property2\n    0        1       10      100\n    1        2       25      250\n    2        3       35      350\n    3        4       40      400\n    ```\n\n    Args:\n        destination_df (pd.DataFrame): Dataframe to modify.\n        source_df (pd.DataFrame): Dataframe with updated columns\n        join_col (str): column to join on\n        properties (list[str]): List of properties to use. If None, will default to all\n            in source_df.\n        fail_if_missing (bool): If True, will raise an error if there are missing IDs in\n            destination_df that exist in source_df.\n    \"\"\"\n    # 1. Identify which properties should be updated; and if they exist in both DFs.\n    if properties is None:\n        properties = [\n            c for c in source_df.columns if c in destination_df.columns and c != join_col\n        ]\n    else:\n        _dest_miss = _df_missing_cols(destination_df, properties + [join_col])\n        if _dest_miss:\n            raise MissingPropertiesError(f\"Properties missing from destination_df: {_dest_miss}\")\n        _source_miss = _df_missing_cols(source_df, properties + [join_col])\n        if _source_miss:\n            raise MissingPropertiesError(f\"Properties missing from source_df: {_source_miss}\")\n\n    # 2. Identify if there are IDs missing from destination_df that exist in source_df\n    if fail_if_missing:\n        missing_ids = set(source_df[join_col]) - set(destination_df[join_col])\n        if missing_ids:\n            raise InvalidJoinFieldError(\n                f\"IDs in source_df missing from destination_df: \\n{missing_ids}\"\n            )\n\n    WranglerLogger.debug(f\"Updating properties for {len(source_df)} records: {properties}.\")\n\n    if not source_df[join_col].is_unique:\n        raise InvalidJoinFieldError(\n            f\"Can't join from source_df when join_col: {join_col} is not unique.\"\n        )\n\n    if not destination_df[join_col].is_unique:\n        return _update_props_from_one_to_many(destination_df, source_df, join_col, properties)\n\n    return _update_props_for_common_idx(destination_df, source_df, join_col, properties)\n
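For the common case of a unique join column, the join-and-overwrite step can be approximated with pandas alone; this is a simplified sketch, not the library's internal `_update_props_*` helpers:

```python
import pandas as pd

destination_df = pd.DataFrame(
    {"trip_id": [1, 2, 3, 4], "property1": [10, 20, 30, 40], "property2": [100, 200, 300, 400]}
)
source_df = pd.DataFrame({"trip_id": [2, 3], "property1": [25, 35], "property2": [250, 350]})

updated_df = destination_df.set_index("trip_id")
# DataFrame.update overwrites only rows/columns present in source_df; others are untouched.
updated_df.update(source_df.set_index("trip_id"))
updated_df = updated_df.reset_index()
# trip_ids 2 and 3 now carry 25/250 and 35/350; 1 and 4 are unchanged.
```

Note that `DataFrame.update` may upcast updated columns to float because the alignment introduces NaNs internally.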
"},{"location":"api/#network_wrangler.utils.data.validate_existing_value_in_df","title":"validate_existing_value_in_df(df, idx, field, expected_value)","text":"

Validate if df[field]==expected_value for all indices in idx.

Source code in network_wrangler/utils/data.py
def validate_existing_value_in_df(df: pd.DataFrame, idx: list[int], field: str, expected_value):\n    \"\"\"Validate if df[field]==expected_value for all indices in idx.\"\"\"\n    if field not in df.columns:\n        WranglerLogger.warning(f\"!! {field} Not an existing field.\")\n        return False\n    if not df.loc[idx, field].eq(expected_value).all():\n        WranglerLogger.warning(\n            f\"Existing value defined for {field} in project card \\\n            does not match the value in the selection links. \\n\\\n            Specified Existing: {expected_value}\\n\\\n            Actual Existing: \\n {df.loc[idx, field]}.\"\n        )\n        return False\n    return True\n
"},{"location":"api/#network_wrangler.utils.geo.InvalidCRSError","title":"InvalidCRSError","text":"

Bases: Exception

Raised when a point is not valid for a given coordinate reference system.

Source code in network_wrangler/utils/geo.py
class InvalidCRSError(Exception):\n    \"\"\"Raised when a point is not valid for a given coordinate reference system.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.geo.MissingNodesError","title":"MissingNodesError","text":"

Bases: Exception

Raised when referenced nodes are missing from the network.

Source code in network_wrangler/utils/geo.py
class MissingNodesError(Exception):\n    \"\"\"Raised when referenced nodes are missing from the network.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.geo.check_point_valid_for_crs","title":"check_point_valid_for_crs(point, crs)","text":"

Check if a point is valid for a given coordinate reference system.

Parameters:

Name Type Description Default point Point

Shapely Point

required crs int

coordinate reference system as an EPSG code

required Source code in network_wrangler/utils/geo.py
def check_point_valid_for_crs(point: Point, crs: int):\n    \"\"\"Check if a point is valid for a given coordinate reference system.\n\n    Args:\n        point: Shapely Point\n        crs: coordinate reference system as an EPSG code\n\n    raises: InvalidCRSError if point is not valid for the given crs\n    \"\"\"\n    crs = CRS.from_user_input(crs)\n    minx, miny, maxx, maxy = crs.area_of_use.bounds\n    ok_bounds = True\n    if not minx <= point.x <= maxx:\n        WranglerLogger.error(f\"Invalid X coordinate for CRS {crs}: {point.x}\")\n        ok_bounds = False\n    if not miny <= point.y <= maxy:\n        WranglerLogger.error(f\"Invalid Y coordinate for CRS {crs}: {point.y}\")\n        ok_bounds = False\n\n    if not ok_bounds:\n        raise InvalidCRSError(f\"Invalid coordinate for CRS {crs}: {point.x}, {point.y}\")\n
"},{"location":"api/#network_wrangler.utils.geo.get_bearing","title":"get_bearing(lat1, lon1, lat2, lon2)","text":"

Calculate the bearing (forward azimuth) between the two points.

returns: bearing in radians

Source code in network_wrangler/utils/geo.py
def get_bearing(lat1, lon1, lat2, lon2):\n    \"\"\"Calculate the bearing (forward azimuth) b/w the two points.\n\n    returns: bearing in radians\n    \"\"\"\n    # bearing in degrees\n    brng = Geodesic.WGS84.Inverse(lat1, lon1, lat2, lon2)[\"azi1\"]\n\n    # convert bearing to radians\n    brng = math.radians(brng)\n\n    return brng\n
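The library delegates to geographiclib's WGS84 geodesic solver; on a perfect sphere the forward azimuth has the closed form sketched below (`sphere_bearing` is an illustrative stand-in, not the library function):

```python
import math

def sphere_bearing(lat1, lon1, lat2, lon2):
    """Forward azimuth in radians on a spherical Earth (0 = north, pi/2 = east)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.atan2(x, y)

# Due east from the equator origin is pi/2; due north is 0.
```

The geographiclib result differs slightly because it accounts for the WGS84 ellipsoid's flattening.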
"},{"location":"api/#network_wrangler.utils.geo.get_bounding_polygon","title":"get_bounding_polygon(boundary_geocode=None, boundary_file=None, boundary_gdf=None, crs=LAT_LON_CRS)","text":"

Get the bounding polygon for a given boundary.

This function retrieves the bounding polygon for a given boundary. The boundary can be provided as a GeoDataFrame, a geocode string or dictionary, or a boundary file. The resulting polygon geometry is returned as a GeoSeries.

Parameters:

Name Type Description Default boundary_geocode Union[str, dict]

A geocode string or dictionary representing the boundary. Defaults to None.

None boundary_file Union[str, Path]

A path to the boundary file. Only used if boundary_geocode is None. Defaults to None.

None boundary_gdf GeoDataFrame

A GeoDataFrame representing the boundary. Only used if boundary_geocode and boundary_file are None. Defaults to None.

None crs int

The coordinate reference system (CRS) code. Defaults to 4326 (WGS84).

LAT_LON_CRS

Returns:

Type Description GeoSeries

gpd.GeoSeries: The polygon geometry representing the bounding polygon.

Source code in network_wrangler/utils/geo.py
def get_bounding_polygon(\n    boundary_geocode: Optional[Union[str, dict]] = None,\n    boundary_file: Optional[Union[str, Path]] = None,\n    boundary_gdf: Optional[gpd.GeoDataFrame] = None,\n    crs: int = LAT_LON_CRS,  # WGS84\n) -> gpd.GeoSeries:\n    \"\"\"Get the bounding polygon for a given boundary.\n\n    This function retrieves the bounding polygon for a given boundary. The boundary can be provided\n    as a GeoDataFrame, a geocode string or dictionary, or a boundary file. The resulting polygon\n    geometry is returned as a GeoSeries.\n\n    Args:\n        boundary_geocode (Union[str, dict], optional): A geocode string or dictionary\n            representing the boundary. Defaults to None.\n        boundary_file (Union[str, Path], optional): A path to the boundary file. Only used if\n            boundary_geocode is None. Defaults to None.\n        boundary_gdf (gpd.GeoDataFrame, optional): A GeoDataFrame representing the boundary.\n            Only used if boundary_geocode and boundary_file are None. Defaults to None.\n        crs (int, optional): The coordinate reference system (CRS) code. Defaults to 4326 (WGS84).\n\n    Returns:\n        gpd.GeoSeries: The polygon geometry representing the bounding polygon.\n    \"\"\"\n    import osmnx as ox\n\n    if sum(x is not None for x in [boundary_gdf, boundary_geocode, boundary_file]) != 1:\n        raise ValueError(\n            \"Exactly one of boundary_gdf, boundary_geocode, or boundary_file must \\\n                         be provided\"\n        )\n\n    OK_BOUNDARY_SUFF = [\".shp\", \".geojson\", \".parquet\"]\n\n    if boundary_geocode is not None:\n        boundary_gdf = ox.geocode_to_gdf(boundary_geocode)\n    if boundary_file is not None:\n        boundary_file = Path(boundary_file)\n        if boundary_file.suffix not in OK_BOUNDARY_SUFF:\n            raise ValueError(\n                f\"Boundary file must have one of the following suffixes: {OK_BOUNDARY_SUFF}\"\n            )\n        if not boundary_file.exists():\n            raise FileNotFoundError(f\"Boundary file {boundary_file} does not exist\")\n        if boundary_file.suffix == \".parquet\":\n            boundary_gdf = gpd.read_parquet(boundary_file)\n        else:\n            boundary_gdf = gpd.read_file(boundary_file)\n            if boundary_file.suffix == \".geojson\":  # geojson standard is WGS84\n                boundary_gdf.crs = crs\n\n    if boundary_gdf.crs is not None:\n        boundary_gdf = boundary_gdf.to_crs(crs)\n    # make sure boundary_gdf is a polygon\n    if len(boundary_gdf.geom_type[boundary_gdf.geom_type != \"Polygon\"]) > 0:\n        raise ValueError(\"boundary_gdf must all be Polygons\")\n    # get the boundary as a single polygon\n    boundary_gs = gpd.GeoSeries([boundary_gdf.geometry.unary_union], crs=crs)\n\n    return boundary_gs\n
"},{"location":"api/#network_wrangler.utils.geo.get_point_geometry_from_linestring","title":"get_point_geometry_from_linestring(polyline_geometry, pos=0)","text":"

Get a point geometry from a linestring geometry.

Parameters:

Name Type Description Default polyline_geometry

shapely LineString instance

required pos int

position in the linestring to get the point from. Defaults to 0.

0 Source code in network_wrangler/utils/geo.py
def get_point_geometry_from_linestring(polyline_geometry, pos: int = 0):\n    \"\"\"Get a point geometry from a linestring geometry.\n\n    Args:\n        polyline_geometry: shapely LineString instance\n        pos: position in the linestring to get the point from. Defaults to 0.\n    \"\"\"\n    # WranglerLogger.debug(\n    #    f\"get_point_geometry_from_linestring.polyline_geometry.coords[0]: \\\n    #    {polyline_geometry.coords[0]}.\"\n    # )\n\n    # Note: when upgrading to shapely 2.0, will need to use following command\n    # _point_coords = get_coordinates(polyline_geometry).tolist()[pos]\n    return point_from_xy(*polyline_geometry.coords[pos])\n
"},{"location":"api/#network_wrangler.utils.geo.length_of_linestring_miles","title":"length_of_linestring_miles(gdf)","text":"

Returns a Series with the linestring length in miles.

Parameters:

Name Type Description Default gdf Union[GeoSeries, GeoDataFrame]

GeoDataFrame with linestring geometry. If given a GeoSeries will attempt to convert to a GeoDataFrame.

required Source code in network_wrangler/utils/geo.py
def length_of_linestring_miles(gdf: Union[gpd.GeoSeries, gpd.GeoDataFrame]) -> pd.Series:\n    \"\"\"Returns a Series with the linestring length in miles.\n\n    Args:\n        gdf: GeoDataFrame with linestring geometry.  If given a GeoSeries will attempt to convert\n            to a GeoDataFrame.\n    \"\"\"\n    # WranglerLogger.debug(f\"length_of_linestring_miles.gdf input:\\n{gdf}.\")\n    if isinstance(gdf, gpd.GeoSeries):\n        gdf = gpd.GeoDataFrame(geometry=gdf)\n\n    p_crs = gdf.estimate_utm_crs()\n    gdf = gdf.to_crs(p_crs)\n    METERS_IN_MILES = 1609.34\n    length_miles = gdf.geometry.length / METERS_IN_MILES\n    length_s = pd.Series(length_miles, index=gdf.index)\n\n    return length_s\n
"},{"location":"api/#network_wrangler.utils.geo.linestring_from_lats_lons","title":"linestring_from_lats_lons(df, lat_fields, lon_fields)","text":"

Create a LineString geometry from a DataFrame with lon/lat fields.

Parameters:

Name Type Description Default df

DataFrame with columns for lon/lat fields.

required lat_fields

list of column names for the lat fields.

required lon_fields

list of column names for the lon fields.

required Source code in network_wrangler/utils/geo.py
def linestring_from_lats_lons(df, lat_fields, lon_fields) -> gpd.GeoSeries:\n    \"\"\"Create a LineString geometry from a DataFrame with lon/lat fields.\n\n    Args:\n        df: DataFrame with columns for lon/lat fields.\n        lat_fields: list of column names for the lat fields.\n        lon_fields: list of column names for the lon fields.\n    \"\"\"\n    if len(lon_fields) != len(lat_fields):\n        raise ValueError(\"lon_fields and lat_fields lists must have the same length\")\n\n    line_geometries = gpd.GeoSeries(\n        [\n            LineString([(row[lon], row[lat]) for lon, lat in zip(lon_fields, lat_fields)])\n            for _, row in df.iterrows()\n        ]\n    )\n\n    return gpd.GeoSeries(line_geometries)\n
"},{"location":"api/#network_wrangler.utils.geo.linestring_from_nodes","title":"linestring_from_nodes(links_df, nodes_df, from_node='A', to_node='B', node_pk='model_node_id')","text":"

Creates a LineString geometry GeoSeries from a DataFrame of links and a DataFrame of nodes.

Parameters:

Name Type Description Default links_df DataFrame

DataFrame with columns for from_node and to_node.

required nodes_df GeoDataFrame

GeoDataFrame with geometry column.

required from_node str

column name in links_df for the from node. Defaults to \u201cA\u201d.

'A' to_node str

column name in links_df for the to node. Defaults to \u201cB\u201d.

'B' node_pk str

primary key column name in nodes_df. Defaults to \u201cmodel_node_id\u201d.

'model_node_id' Source code in network_wrangler/utils/geo.py
def linestring_from_nodes(\n    links_df: pd.DataFrame,\n    nodes_df: gpd.GeoDataFrame,\n    from_node: str = \"A\",\n    to_node: str = \"B\",\n    node_pk: str = \"model_node_id\",\n) -> gpd.GeoSeries:\n    \"\"\"Creates a LineString geometry GeoSeries from a DataFrame of links and a DataFrame of nodes.\n\n    Args:\n        links_df: DataFrame with columns for from_node and to_node.\n        nodes_df: GeoDataFrame with geometry column.\n        from_node: column name in links_df for the from node. Defaults to \"A\".\n        to_node: column name in links_df for the to node. Defaults to \"B\".\n        node_pk: primary key column name in nodes_df. Defaults to \"model_node_id\".\n    \"\"\"\n    assert \"geometry\" in nodes_df.columns, \"nodes_df must have a 'geometry' column\"\n\n    idx_name = \"index\" if links_df.index.name is None else links_df.index.name\n    WranglerLogger.debug(f\"Index name: {idx_name}\")\n    required_link_cols = [from_node, to_node]\n\n    if not all([col in links_df.columns for col in required_link_cols]):\n        WranglerLogger.error(\n            f\"links_df.columns missing required columns.\\n\\\n                            links_df.columns: {links_df.columns}\\n\\\n                            required_link_cols: {required_link_cols}\"\n        )\n        raise ValueError(\n            f\"links_df must have columns {required_link_cols} to create linestring from nodes\"\n        )\n\n    links_geo_df = links_df[required_link_cols].copy()\n    # need to continuously reset the index to make sure the index is the same as the link index\n    links_geo_df = (\n        links_geo_df.reset_index()\n        .merge(\n            nodes_df[[node_pk, \"geometry\"]],\n            left_on=from_node,\n            right_on=node_pk,\n            how=\"left\",\n        )\n        .set_index(idx_name)\n    )\n\n    links_geo_df = links_geo_df.rename(columns={\"geometry\": \"geometry_A\"})\n\n    links_geo_df = (\n        links_geo_df.reset_index()\n        .merge(\n            nodes_df[[node_pk, \"geometry\"]],\n            left_on=to_node,\n            right_on=node_pk,\n            how=\"left\",\n        )\n        .set_index(idx_name)\n    )\n\n    links_geo_df = links_geo_df.rename(columns={\"geometry\": \"geometry_B\"})\n\n    # makes sure all nodes exist\n    _missing_geo_links_df = links_geo_df[\n        links_geo_df[\"geometry_A\"].isnull() | links_geo_df[\"geometry_B\"].isnull()\n    ]\n    if not _missing_geo_links_df.empty:\n        missing_nodes = _missing_geo_links_df[[from_node, to_node]].values\n        WranglerLogger.error(\n            f\"Cannot create link geometry from nodes because the nodes are\\\n                             missing from the network. Missing nodes: {missing_nodes}\"\n        )\n        raise MissingNodesError(\"Specified from/to nodes are missing in nodes_df\")\n\n    # create geometry from points\n    links_geo_df[\"geometry\"] = links_geo_df.apply(\n        lambda row: LineString([row[\"geometry_A\"], row[\"geometry_B\"]]), axis=1\n    )\n\n    # convert to GeoDataFrame\n    links_gdf = gpd.GeoDataFrame(links_geo_df[\"geometry\"], geometry=links_geo_df[\"geometry\"])\n    return links_gdf[\"geometry\"]\n
"},{"location":"api/#network_wrangler.utils.geo.location_ref_from_point","title":"location_ref_from_point(geometry, sequence=1, bearing=None, distance_to_next_ref=None)","text":"

Generates a shared street point location reference.

Parameters:

Name Type Description Default geometry Point

Point shapely geometry

required sequence int

Sequence if part of polyline. Defaults to 1.

1 bearing float

Direction of line if part of polyline. Defaults to None.

None distance_to_next_ref float

Distance to next point if part of polyline. Defaults to None.

None

Returns:

Name Type Description LocationReference LocationReference

As defined by sharedStreets.io schema

Source code in network_wrangler/utils/geo.py
def location_ref_from_point(\n    geometry: Point,\n    sequence: int = 1,\n    bearing: float = None,\n    distance_to_next_ref: float = None,\n) -> LocationReference:\n    \"\"\"Generates a shared street point location reference.\n\n    Args:\n        geometry (Point): Point shapely geometry\n        sequence (int, optional): Sequence if part of polyline. Defaults to 1.\n        bearing (float, optional): Direction of line if part of polyline. Defaults to None.\n        distance_to_next_ref (float, optional): Distance to next point if part of polyline.\n            Defaults to None.\n\n    Returns:\n        LocationReference: As defined by sharedStreets.io schema\n    \"\"\"\n    lr = {\n        \"point\": LatLongCoordinates(geometry.coords[0]),\n    }\n\n    for arg in [\"sequence\", \"bearing\", \"distance_to_next_ref\"]:\n        if locals()[arg] is not None:\n            lr[arg] = locals()[arg]\n\n    return LocationReference(**lr)\n
"},{"location":"api/#network_wrangler.utils.geo.location_refs_from_linestring","title":"location_refs_from_linestring(geometry)","text":"

Generates a shared street location reference from linestring.

Parameters:

Name Type Description Default geometry LineString

Shapely LineString instance

required

Returns:

Name Type Description LocationReferences List[LocationReference]

As defined by sharedStreets.io schema

Source code in network_wrangler/utils/geo.py
def location_refs_from_linestring(geometry: LineString) -> List[LocationReference]:\n    \"\"\"Generates a shared street location reference from linestring.\n\n    Args:\n        geometry (LineString): Shapely LineString instance\n\n    Returns:\n        LocationReferences: As defined by sharedStreets.io schema\n    \"\"\"\n    return [\n        location_ref_from_point(\n            point,\n            sequence=i + 1,\n            distance_to_next_ref=point.distance(geometry.coords[i + 1]),\n            bearing=get_bearing(*point.coords[0], *geometry.coords[i + 1]),\n        )\n        for i, point in enumerate(geometry.coords[:-1])\n    ]\n
"},{"location":"api/#network_wrangler.utils.geo.offset_point_with_distance_and_bearing","title":"offset_point_with_distance_and_bearing(lon, lat, distance, bearing)","text":"

Get the new lon-lat (in degrees) given current point (lon-lat), distance and bearing.

Parameters:

Name Type Description Default lon float

longitude of original point

required lat float

latitude of original point

required distance float

distance in meters to offset point by

required bearing float

direction to offset point to in radians

required Source code in network_wrangler/utils/geo.py
def offset_point_with_distance_and_bearing(\n    lon: float, lat: float, distance: float, bearing: float\n) -> List[float]:\n    \"\"\"Get the new lon-lat (in degrees) given current point (lon-lat), distance and bearing.\n\n    Args:\n        lon: longitude of original point\n        lat: latitude of original point\n        distance: distance in meters to offset point by\n        bearing: direction to offset point to in radians\n\n    returns: list of new offset lon-lat\n    \"\"\"\n    # Earth's radius in meters\n    radius = 6378137\n\n    # convert the lat long from degree to radians\n    lat_radians = math.radians(lat)\n    lon_radians = math.radians(lon)\n\n    # calculate the new lat long in radians\n    out_lat_radians = math.asin(\n        math.sin(lat_radians) * math.cos(distance / radius)\n        + math.cos(lat_radians) * math.sin(distance / radius) * math.cos(bearing)\n    )\n\n    out_lon_radians = lon_radians + math.atan2(\n        math.sin(bearing) * math.sin(distance / radius) * math.cos(lat_radians),\n        math.cos(distance / radius) - math.sin(lat_radians) * math.sin(lat_radians),\n    )\n    # convert the new lat long back to degree\n    out_lat = math.degrees(out_lat_radians)\n    out_lon = math.degrees(out_lon_radians)\n\n    return [out_lon, out_lat]\n
"},{"location":"api/#network_wrangler.utils.geo.point_from_xy","title":"point_from_xy(x, y, xy_crs=LAT_LON_CRS, point_crs=LAT_LON_CRS)","text":"

Creates point geometry from x and y coordinates.

Parameters:

Name Type Description Default x

x coordinate, in xy_crs

required y

y coordinate, in xy_crs

required xy_crs int

coordinate reference system in EPSG code for x/y inputs. Defaults to 4326 (WGS84)

LAT_LON_CRS point_crs int

coordinate reference system in EPSG code for point output. Defaults to 4326 (WGS84)

LAT_LON_CRS Source code in network_wrangler/utils/geo.py
def point_from_xy(x, y, xy_crs: int = LAT_LON_CRS, point_crs: int = LAT_LON_CRS):\n    \"\"\"Creates point geometry from x and y coordinates.\n\n    Args:\n        x: x coordinate, in xy_crs\n        y: y coordinate, in xy_crs\n        xy_crs: coordinate reference system in EPSG code for x/y inputs. Defaults to 4326 (WGS84)\n        point_crs: coordinate reference system in EPSG code for point output.\n            Defaults to 4326 (WGS84)\n\n    Returns: Shapely Point in point_crs\n    \"\"\"\n    point = Point(x, y)\n\n    if xy_crs == point_crs:\n        check_point_valid_for_crs(point, point_crs)\n        return point\n\n    if (xy_crs, point_crs) not in transformers:\n        # store transformers in dictionary because they are an \"expensive\" operation\n        transformers[(xy_crs, point_crs)] = Transformer.from_proj(\n            Proj(init=\"epsg:\" + str(xy_crs)),\n            Proj(init=\"epsg:\" + str(point_crs)),\n            always_xy=True,  # required b/c Proj v6+ uses lon/lat instead of x/y\n        )\n\n    return transform(transformers[(xy_crs, point_crs)].transform, point)\n
"},{"location":"api/#network_wrangler.utils.geo.to_points_gdf","title":"to_points_gdf(table, ref_nodes_df=None, ref_road_net=None)","text":"

Convert a table to a GeoDataFrame.

If the table is already a GeoDataFrame, return it as is. Otherwise, attempt to convert the table to a GeoDataFrame using the following methods: 1. If the table has a \u2018geometry\u2019 column, return a GeoDataFrame using that column. 2. If the table has \u2018lat\u2019 and \u2018lon\u2019 columns, return a GeoDataFrame using those columns. 3. If the table has a \u2018*model_node_id\u2019 column, return a GeoDataFrame using that column and the nodes_df provided. If none of the above, raise a ValueError.

Parameters:

Name Type Description Default table DataFrame

DataFrame to convert to GeoDataFrame.

required ref_nodes_df GeoDataFrame

GeoDataFrame of nodes to use to convert model_node_id to geometry.

None ref_road_net 'RoadwayNetwork'

RoadwayNetwork object to use to convert model_node_id to geometry.

None

Returns:

Name Type Description GeoDataFrame GeoDataFrame

GeoDataFrame representation of the table.

Source code in network_wrangler/utils/geo.py
def to_points_gdf(\n    table: pd.DataFrame,\n    ref_nodes_df: gpd.GeoDataFrame = None,\n    ref_road_net: \"RoadwayNetwork\" = None,\n) -> gpd.GeoDataFrame:\n    \"\"\"Convert a table to a GeoDataFrame.\n\n    If the table is already a GeoDataFrame, return it as is. Otherwise, attempt to convert the\n    table to a GeoDataFrame using the following methods:\n    1. If the table has a 'geometry' column, return a GeoDataFrame using that column.\n    2. If the table has 'lat' and 'lon' columns, return a GeoDataFrame using those columns.\n    3. If the table has a '*model_node_id' column, return a GeoDataFrame using that column and the\n         nodes_df provided.\n    If none of the above, raise a ValueError.\n\n    Args:\n        table: DataFrame to convert to GeoDataFrame.\n        ref_nodes_df: GeoDataFrame of nodes to use to convert model_node_id to geometry.\n        ref_road_net: RoadwayNetwork object to use to convert model_node_id to geometry.\n\n    Returns:\n        GeoDataFrame: GeoDataFrame representation of the table.\n    \"\"\"\n    if isinstance(table, gpd.GeoDataFrame):\n        return table\n\n    WranglerLogger.debug(\"Converting GTFS table to GeoDataFrame\")\n    if \"geometry\" in table.columns:\n        return gpd.GeoDataFrame(table, geometry=\"geometry\")\n\n    lat_cols = list(filter(lambda col: \"lat\" in col, table.columns))\n    lon_cols = list(filter(lambda col: \"lon\" in col, table.columns))\n    model_node_id_cols = list(filter(lambda col: \"model_node_id\" in col, table.columns))\n\n    if not (lat_cols and lon_cols) and not model_node_id_cols:\n        raise ValueError(\n            \"Could not find lat/long, geometry columns or *model_node_id column in \\\n                         table necessary to convert to GeoDataFrame\"\n        )\n\n    if lat_cols and lon_cols:\n        # using first found lat and lon columns\n        return gpd.GeoDataFrame(\n            table,\n            geometry=gpd.points_from_xy(table[lon_cols[0]], 
table[lat_cols[0]]),\n            crs=\"EPSG:4326\",\n        )\n\n    if model_node_id_cols:\n        node_id_col = model_node_id_cols[0]\n\n        if ref_nodes_df is None:\n            if ref_road_net is None:\n                raise ValueError(\n                    \"Must provide either nodes_df or road_net to convert \\\n                                 model_node_id to geometry\"\n                )\n            ref_nodes_df = ref_road_net.nodes_df\n\n        WranglerLogger.debug(\"Converting table to GeoDataFrame using model_node_id\")\n\n        _table = table.merge(\n            ref_nodes_df[[\"model_node_id\", \"geometry\"]],\n            left_on=node_id_col,\n            right_on=\"model_node_id\",\n        )\n        return gpd.GeoDataFrame(_table, geometry=\"geometry\")\n\n    raise ValueError(\n        \"Could not find lat/long, geometry columns or *model_node_id column in table \\\n                     necessary to convert to GeoDataFrame\"\n    )\n
"},{"location":"api/#network_wrangler.utils.geo.update_nodes_in_linestring_geometry","title":"update_nodes_in_linestring_geometry(original_df, updated_nodes_df, position)","text":"

Updates the nodes in a linestring geometry and returns updated geometry.

Parameters:

Name Type Description Default original_df GeoDataFrame

GeoDataFrame with the model_node_id and linestring geometry

required updated_nodes_df GeoDataFrame

GeoDataFrame with updated node geometries.

required position int

position in the linestring to update with the node.

required Source code in network_wrangler/utils/geo.py
def update_nodes_in_linestring_geometry(\n    original_df: gpd.GeoDataFrame,\n    updated_nodes_df: gpd.GeoDataFrame,\n    position: int,\n) -> gpd.GeoSeries:\n    \"\"\"Updates the nodes in a linestring geometry and returns updated geometry.\n\n    Args:\n        original_df: GeoDataFrame with the `model_node_id` and linestring geometry\n        updated_nodes_df: GeoDataFrame with updated node geometries.\n        position: position in the linestring to update with the node.\n    \"\"\"\n    LINK_FK_NODE = [\"A\", \"B\"]\n    original_index = original_df.index\n\n    updated_df = original_df.reset_index().merge(\n        updated_nodes_df[[\"model_node_id\", \"geometry\"]],\n        left_on=LINK_FK_NODE[position],\n        right_on=\"model_node_id\",\n        suffixes=(\"\", \"_node\"),\n    )\n\n    updated_df[\"geometry\"] = updated_df.apply(\n        lambda row: update_points_in_linestring(\n            row[\"geometry\"], row[\"geometry_node\"].coords[0], position\n        ),\n        axis=1,\n    )\n\n    updated_df = updated_df.reset_index().set_index(original_index.names)\n\n    WranglerLogger.debug(f\"updated_df - AFTER: \\n {updated_df.geometry}\")\n    return updated_df[\"geometry\"]\n
"},{"location":"api/#network_wrangler.utils.geo.update_point_geometry","title":"update_point_geometry(df, ref_point_df, lon_field='X', lat_field='Y', id_field='model_node_id', ref_lon_field='X', ref_lat_field='Y', ref_id_field='model_node_id')","text":"

Returns copy of df with lat and long fields updated with geometry from ref_point_df.

NOTE: does not update \u201cgeometry\u201d field if it exists.

Source code in network_wrangler/utils/geo.py
def update_point_geometry(\n    df: pd.DataFrame,\n    ref_point_df: pd.DataFrame,\n    lon_field: str = \"X\",\n    lat_field: str = \"Y\",\n    id_field: str = \"model_node_id\",\n    ref_lon_field: str = \"X\",\n    ref_lat_field: str = \"Y\",\n    ref_id_field: str = \"model_node_id\",\n) -> pd.DataFrame:\n    \"\"\"Returns copy of df with lat and long fields updated with geometry from ref_point_df.\n\n    NOTE: does not update \"geometry\" field if it exists.\n    \"\"\"\n    df = copy.deepcopy(df)\n\n    ref_df = ref_point_df.rename(\n        columns={\n            ref_lon_field: lon_field,\n            ref_lat_field: lat_field,\n            ref_id_field: id_field,\n        }\n    )\n\n    updated_df = update_df_by_col_value(\n        df,\n        ref_df[[id_field, lon_field, lat_field]],\n        id_field,\n        properties=[lat_field, lon_field],\n        fail_if_missing=False,\n    )\n    return updated_df\n
"},{"location":"api/#network_wrangler.utils.geo.update_points_in_linestring","title":"update_points_in_linestring(linestring, updated_coords, position)","text":"

Replaces a point in a linestring with a new point.

Parameters:

Name Type Description Default linestring LineString

original linestring

required updated_coords List[float]

updated point coordinates

required position int

position in the linestring to update

required Source code in network_wrangler/utils/geo.py
def update_points_in_linestring(\n    linestring: LineString, updated_coords: List[float], position: int\n):\n    \"\"\"Replaces a point in a linestring with a new point.\n\n    Args:\n        linestring (LineString): original linestring\n        updated_coords (List[float]): updated point coordinates\n        position (int): position in the linestring to update\n    \"\"\"\n    coords = [c for c in linestring.coords]\n    coords[position] = updated_coords\n    return LineString(coords)\n
"},{"location":"api/#network_wrangler.utils.df_accessors.DictQueryAccessor","title":"DictQueryAccessor","text":"

Query link, node and shape dataframes using project selection dictionary.

Will overlook any keys which are not columns in the dataframe.

Usage:

selection_dict = {\n    \"lanes\":[1,2,3],\n    \"name\":['6th','Sixth','sixth'],\n    \"drive_access\": 1,\n}\nselected_links_df = links_df.dict_query(selection_dict)\n
Source code in network_wrangler/utils/df_accessors.py
@pd.api.extensions.register_dataframe_accessor(\"dict_query\")\nclass DictQueryAccessor:\n    \"\"\"Query link, node and shape dataframes using project selection dictionary.\n\n    Will overlook any keys which are not columns in the dataframe.\n\n    Usage:\n\n    ```\n    selection_dict = {\n        \"lanes\":[1,2,3],\n        \"name\":['6th','Sixth','sixth'],\n        \"drive_access\": 1,\n    }\n    selected_links_df = links_df.dict_query(selection_dict)\n    ```\n\n    \"\"\"\n\n    def __init__(self, pandas_obj):\n        \"\"\"Initialization function for the dictionary query accessor.\"\"\"\n        self._obj = pandas_obj\n\n    def __call__(self, selection_dict: dict, return_all_if_none: bool = False):\n        \"\"\"Queries the dataframe using the selection dictionary.\n\n        Args:\n            selection_dict (dict): _description_\n            return_all_if_none (bool, optional): If True, will return entire df if dict has\n                 no values. Defaults to False.\n        \"\"\"\n        _selection_dict = {\n            k: v for k, v in selection_dict.items() if k in self._obj.columns and v is not None\n        }\n\n        if not _selection_dict:\n            if return_all_if_none:\n                return self._obj\n            raise ValueError(f\"Relevant part of selection dictionary is empty: {selection_dict}\")\n\n        _sel_query = dict_to_query(_selection_dict)\n        WranglerLogger.debug(f\"_sel_query: \\n   {_sel_query}\")\n        _df = self._obj.query(_sel_query, engine=\"python\")\n\n        if len(_df) == 0:\n            WranglerLogger.warning(\n                f\"No records found in df \\\n                  using selection: {selection_dict}\"\n            )\n        return _df\n
"},{"location":"api/#network_wrangler.utils.df_accessors.DictQueryAccessor.__call__","title":"__call__(selection_dict, return_all_if_none=False)","text":"

Queries the dataframe using the selection dictionary.

Parameters:

Name Type Description Default selection_dict dict

Dictionary of column names mapped to values (or lists of values) to select by.

required return_all_if_none bool

If True, will return entire df if dict has no values. Defaults to False.

False Source code in network_wrangler/utils/df_accessors.py
def __call__(self, selection_dict: dict, return_all_if_none: bool = False):\n    \"\"\"Queries the dataframe using the selection dictionary.\n\n    Args:\n        selection_dict (dict): _description_\n        return_all_if_none (bool, optional): If True, will return entire df if dict has\n             no values. Defaults to False.\n    \"\"\"\n    _selection_dict = {\n        k: v for k, v in selection_dict.items() if k in self._obj.columns and v is not None\n    }\n\n    if not _selection_dict:\n        if return_all_if_none:\n            return self._obj\n        raise ValueError(f\"Relevant part of selection dictionary is empty: {selection_dict}\")\n\n    _sel_query = dict_to_query(_selection_dict)\n    WranglerLogger.debug(f\"_sel_query: \\n   {_sel_query}\")\n    _df = self._obj.query(_sel_query, engine=\"python\")\n\n    if len(_df) == 0:\n        WranglerLogger.warning(\n            f\"No records found in df \\\n              using selection: {selection_dict}\"\n        )\n    return _df\n
"},{"location":"api/#network_wrangler.utils.df_accessors.DictQueryAccessor.__init__","title":"__init__(pandas_obj)","text":"

Initialization function for the dictionary query accessor.

Source code in network_wrangler/utils/df_accessors.py
def __init__(self, pandas_obj):\n    \"\"\"Initialization function for the dictionary query accessor.\"\"\"\n    self._obj = pandas_obj\n
"},{"location":"api/#network_wrangler.utils.df_accessors.dfHash","title":"dfHash","text":"

Creates a dataframe hash that is compatible with geopandas and various metadata.

Definitely not the fastest, but she seems to work where others have failed.

Source code in network_wrangler/utils/df_accessors.py
@pd.api.extensions.register_dataframe_accessor(\"df_hash\")\nclass dfHash:\n    \"\"\"Creates a dataframe hash that is compatible with geopandas and various metadata.\n\n    Definitely not the fastest, but she seems to work where others have failed.\n    \"\"\"\n\n    def __init__(self, pandas_obj):\n        \"\"\"Initialization function for the dataframe hash.\"\"\"\n        self._obj = pandas_obj\n\n    def __call__(self):\n        \"\"\"Function to hash the dataframe.\"\"\"\n        _value = str(self._obj.values).encode()\n        hash = hashlib.sha1(_value).hexdigest()\n        return hash\n
"},{"location":"api/#network_wrangler.utils.df_accessors.dfHash.__call__","title":"__call__()","text":"

Function to hash the dataframe.

Source code in network_wrangler/utils/df_accessors.py
def __call__(self):\n    \"\"\"Function to hash the dataframe.\"\"\"\n    _value = str(self._obj.values).encode()\n    hash = hashlib.sha1(_value).hexdigest()\n    return hash\n
"},{"location":"api/#network_wrangler.utils.df_accessors.dfHash.__init__","title":"__init__(pandas_obj)","text":"

Initialization function for the dataframe hash.

Source code in network_wrangler/utils/df_accessors.py
def __init__(self, pandas_obj):\n    \"\"\"Initialization function for the dataframe hash.\"\"\"\n    self._obj = pandas_obj\n
"},{"location":"api/#network_wrangler.logger.setup_logging","title":"setup_logging(info_log_filename=None, debug_log_filename='wrangler_{}.debug.log'.format(datetime.now().strftime('%Y_%m_%d__%H_%M_%S')), std_out_level='info')","text":"

Sets up the WranglerLogger w.r.t. the debug file location and if logging to console.

Called by the test_logging fixture in conftest.py and can be called by the user to set up logging for their session. If called multiple times, the logger will be reset.

Parameters:

Name Type Description Default info_log_filename str

the location of the log file that will get created to add the INFO log. The INFO log is terse, just gives the bare minimum of details. Defaults to file in cwd() wrangler_[datetime].log. To turn off logging to a file, use info_log_filename = None.

None debug_log_filename str

the location of the log file that will get created to add the DEBUG log. The DEBUG log is very noisy, for debugging. Defaults to file in cwd() wrangler_[datetime].log. To turn off logging to a file, use debug_log_filename = None.

format(strftime('%Y_%m_%d__%H_%M_%S')) std_out_level str

the level of logging to the console. One of \u201cinfo\u201d, \u201cwarning\u201d, \u201cdebug\u201d. Defaults to \u201cinfo\u201d but will be set to ERROR if nothing provided matches.

'info' Source code in network_wrangler/logger.py
def setup_logging(\n    info_log_filename: str = None,\n    debug_log_filename: str = \"wrangler_{}.debug.log\".format(\n        datetime.now().strftime(\"%Y_%m_%d__%H_%M_%S\")\n    ),\n    std_out_level: str = \"info\",\n):\n    \"\"\"Sets up the WranglerLogger w.r.t. the debug file location and if logging to console.\n\n    Called by the test_logging fixture in conftest.py and can be called by the user to setup\n    logging for their session. If called multiple times, the logger will be reset.\n\n    Args:\n        info_log_filename: the location of the log file that will get created to add the INFO log.\n            The INFO Log is terse, just gives the bare minimum of details.\n            Defaults to file in cwd() `wrangler_[datetime].log`. To turn off logging to a file,\n            use log_filename = None.\n        debug_log_filename: the location of the log file that will get created to add the DEBUG log\n            The DEBUG log is very noisy, for debugging. Defaults to file in cwd()\n            `wrangler_[datetime].log`. To turn off logging to a file, use log_filename = None.\n        std_out_level: the level of logging to the console. 
One of \"info\", \"warning\", \"debug\".\n            Defaults to \"info\" but will be set to ERROR if nothing provided matches.\n    \"\"\"\n    # add function variable so that we know if logging has been called\n    setup_logging.called = True\n\n    # Clear handles if any exist already\n    WranglerLogger.handlers = []\n\n    WranglerLogger.setLevel(logging.DEBUG)\n\n    FORMAT = logging.Formatter(\n        \"%(asctime)-15s %(levelname)s: %(message)s\", datefmt=\"%Y-%m-%d %H:%M:%S,\"\n    )\n    if not info_log_filename:\n        info_log_filename = os.path.join(\n            os.getcwd(),\n            \"network_wrangler_{}.info.log\".format(datetime.now().strftime(\"%Y_%m_%d__%H_%M_%S\")),\n        )\n\n    info_file_handler = logging.StreamHandler(open(info_log_filename, \"w\"))\n    info_file_handler.setLevel(logging.INFO)\n    info_file_handler.setFormatter(FORMAT)\n    WranglerLogger.addHandler(info_file_handler)\n\n    # create debug file only when debug_log_filename is provided\n    if debug_log_filename:\n        debug_log_handler = logging.StreamHandler(open(debug_log_filename, \"w\"))\n        debug_log_handler.setLevel(logging.DEBUG)\n        debug_log_handler.setFormatter(FORMAT)\n        WranglerLogger.addHandler(debug_log_handler)\n\n    console_handler = logging.StreamHandler(sys.stdout)\n    console_handler.setLevel(logging.DEBUG)\n    console_handler.setFormatter(FORMAT)\n    WranglerLogger.addHandler(console_handler)\n    if std_out_level == \"debug\":\n        console_handler.setLevel(logging.DEBUG)\n    elif std_out_level == \"info\":\n        console_handler.setLevel(logging.INFO)\n    elif std_out_level == \"warning\":\n        console_handler.setLevel(logging.WARNING)\n    else:\n        console_handler.setLevel(logging.ERROR)\n
"},{"location":"data_models/","title":"Data Models","text":""},{"location":"data_models/#roadway","title":"Roadway","text":""},{"location":"data_models/#tables","title":"Tables","text":"

Datamodels for Roadway Network Tables.

This module contains the datamodels used to validate the format and types of Roadway Network tables.

Includes: - RoadLinksTable - RoadNodesTable - RoadShapesTable - ExplodedScopedLinkPropertyTable

"},{"location":"data_models/#network_wrangler.models.roadway.tables.ExplodedScopedLinkPropertyTable","title":"ExplodedScopedLinkPropertyTable","text":"

Bases: DataFrameModel

Datamodel used to validate an exploded links_df by scope.

Source code in network_wrangler/models/roadway/tables.py
class ExplodedScopedLinkPropertyTable(DataFrameModel):\n    \"\"\"Datamodel used to validate an exploded links_df by scope.\"\"\"\n\n    model_link_id: Series[int]\n    category: Series[Any]\n    timespan: Series[list[str]]\n    start_time: Series[dt.datetime]\n    end_time: Series[dt.datetime]\n    scoped: Series[Any] = pa.Field(default=None, nullable=True)\n\n    class Config:\n        \"\"\"Config for ExplodedScopedLinkPropertySchema.\"\"\"\n\n        name = \"ExplodedScopedLinkPropertySchema\"\n        coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.ExplodedScopedLinkPropertyTable.Config","title":"Config","text":"

Config for ExplodedScopedLinkPropertySchema.

Source code in network_wrangler/models/roadway/tables.py
class Config:\n    \"\"\"Config for ExplodedScopedLinkPropertySchema.\"\"\"\n\n    name = \"ExplodedScopedLinkPropertySchema\"\n    coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadLinksTable","title":"RoadLinksTable","text":"

Bases: DataFrameModel

Datamodel used to validate if links_df is of correct format and types.

Source code in network_wrangler/models/roadway/tables.py
class RoadLinksTable(DataFrameModel):\n    \"\"\"Datamodel used to validate if links_df is of correct format and types.\"\"\"\n\n    model_link_id: Series[int] = pa.Field(coerce=True, unique=True)\n    model_link_id_idx: Optional[Series[int]] = pa.Field(coerce=True, unique=True)\n    A: Series[int] = pa.Field(nullable=False, coerce=True)\n    B: Series[int] = pa.Field(nullable=False, coerce=True)\n    geometry: GeoSeries = pa.Field(nullable=False)\n    name: Series[str] = pa.Field(nullable=True)\n    rail_only: Series[bool] = pa.Field(coerce=True, nullable=False, default=False)\n    bus_only: Series[bool] = pa.Field(coerce=True, nullable=False, default=False)\n    drive_access: Series[bool] = pa.Field(coerce=True, nullable=False, default=True)\n    bike_access: Series[bool] = pa.Field(coerce=True, nullable=False, default=True)\n    walk_access: Series[bool] = pa.Field(coerce=True, nullable=False, default=True)\n    distance: Series[float] = pa.Field(coerce=True, nullable=True)\n\n    roadway: Series[str] = pa.Field(nullable=False, default=\"road\")\n    managed: Series[int] = pa.Field(coerce=True, nullable=False, default=0)\n\n    shape_id: Series[str] = pa.Field(coerce=True, nullable=True)\n    lanes: Series[Any] = pa.Field(coerce=True, nullable=True, default=0)\n    price: Series[float] = pa.Field(coerce=True, nullable=False, default=0)\n\n    # Optional Fields\n    access: Optional[Series[Any]] = pa.Field(coerce=True, nullable=True, default=None)\n\n    sc_lanes: Optional[Series[object]] = pa.Field(coerce=True, nullable=True, default=None)\n    sc_price: Optional[Series[object]] = pa.Field(coerce=True, nullable=True, default=None)\n\n    ML_lanes: Optional[Series[Int64]] = pa.Field(coerce=True, nullable=True, default=None)\n    ML_price: Optional[Series[float]] = pa.Field(coerce=True, nullable=True, default=0)\n    ML_access: Optional[Series[Any]] = pa.Field(coerce=True, nullable=True, default=True)\n    ML_access_point: Optional[Series[bool]] = pa.Field(\n      
  coerce=True,\n        default=False,\n    )\n    ML_egress_point: Optional[Series[bool]] = pa.Field(\n        coerce=True,\n        default=False,\n    )\n    sc_ML_lanes: Optional[Series[object]] = pa.Field(\n        coerce=True,\n        nullable=True,\n        default=None,\n    )\n    sc_ML_price: Optional[Series[object]] = pa.Field(\n        coerce=True,\n        nullable=True,\n        default=None,\n    )\n    sc_ML_access: Optional[Series[object]] = pa.Field(\n        coerce=True,\n        nullable=True,\n        default=None,\n    )\n\n    ML_geometry: Optional[GeoSeries] = pa.Field(nullable=True, coerce=True, default=None)\n    ML_shape_id: Optional[Series[str]] = pa.Field(nullable=True, coerce=True, default=None)\n\n    truck_access: Optional[Series[bool]] = pa.Field(coerce=True, nullable=True, default=True)\n    osm_link_id: Series[str] = pa.Field(coerce=True, nullable=True, default=\"\")\n    # todo this should be List[dict] but ranch output something else so had to have it be Any.\n    locationReferences: Optional[Series[Any]] = pa.Field(\n        coerce=True,\n        nullable=True,\n        default=\"\",\n    )\n\n    GP_A: Optional[Series[Int64]] = pa.Field(coerce=True, nullable=True, default=None)\n    GP_B: Optional[Series[Int64]] = pa.Field(coerce=True, nullable=True, default=None)\n\n    class Config:\n        \"\"\"Config for RoadLinksTable.\"\"\"\n\n        name = \"RoadLinksTable\"\n        add_missing_columns = True\n        coerce = True\n\n    # @pa.dataframe_check\n    # def unique_ab(cls, df: pd.DataFrame) -> bool:\n    #     \"\"\"Check that combination of A and B are unique.\"\"\"\n    #     return ~df[[\"A\", \"B\"]].duplicated()\n\n    # TODO add check that if there is managed>1 anywhere, that ML_ columns are present.\n\n    @pa.dataframe_check\n    def check_scoped_fields(cls, df: pd.DataFrame) -> Series[bool]:\n        \"\"\"Checks that all fields starting with 'sc_' or 'sc_ML_' are valid ScopedLinkValueList.\n\n        Custom 
check to validate fields starting with 'sc_' or 'sc_ML_'\n        against a ScopedLinkValueItem model, handling both mandatory and optional fields.\n        \"\"\"\n        scoped_fields = [\n            col for col in df.columns if col.startswith(\"sc_\") or col.startswith(\"sc_ML\")\n        ]\n        results = []\n\n        for field in scoped_fields:\n            if df[field].notna().any():\n                results.append(\n                    df[field].dropna().apply(validate_pyd, args=(ScopedLinkValueList,)).all()\n                )\n            else:\n                # Handling optional fields: Assume validation is true if the field is entirely NA\n                results.append(True)\n\n        # Combine all results: True if all fields pass validation\n        return pd.Series(all(results), index=df.index)\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadLinksTable.Config","title":"Config","text":"

Config for RoadLinksTable.

Source code in network_wrangler/models/roadway/tables.py
class Config:\n    \"\"\"Config for RoadLinksTable.\"\"\"\n\n    name = \"RoadLinksTable\"\n    add_missing_columns = True\n    coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadLinksTable.check_scoped_fields","title":"check_scoped_fields(df)","text":"

Checks that all fields starting with \u2018sc_\u2019 or \u2018sc_ML_\u2019 are valid ScopedLinkValueList.

Custom check to validate fields starting with \u2018sc_\u2019 or \u2018sc_ML_\u2019 against a ScopedLinkValueItem model, handling both mandatory and optional fields.

Source code in network_wrangler/models/roadway/tables.py
@pa.dataframe_check\ndef check_scoped_fields(cls, df: pd.DataFrame) -> Series[bool]:\n    \"\"\"Checks that all fields starting with 'sc_' or 'sc_ML_' are valid ScopedLinkValueList.\n\n    Custom check to validate fields starting with 'sc_' or 'sc_ML_'\n    against a ScopedLinkValueItem model, handling both mandatory and optional fields.\n    \"\"\"\n    scoped_fields = [\n        col for col in df.columns if col.startswith(\"sc_\") or col.startswith(\"sc_ML\")\n    ]\n    results = []\n\n    for field in scoped_fields:\n        if df[field].notna().any():\n            results.append(\n                df[field].dropna().apply(validate_pyd, args=(ScopedLinkValueList,)).all()\n            )\n        else:\n            # Handling optional fields: Assume validation is true if the field is entirely NA\n            results.append(True)\n\n    # Combine all results: True if all fields pass validation\n    return pd.Series(all(results), index=df.index)\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadNodesTable","title":"RoadNodesTable","text":"

Bases: DataFrameModel

Datamodel used to validate if nodes_df is of correct format and types.

Source code in network_wrangler/models/roadway/tables.py
class RoadNodesTable(DataFrameModel):\n    \"\"\"Datamodel used to validate if nodes_df is of correct format and types.\"\"\"\n\n    model_node_id: Series[int] = pa.Field(coerce=True, unique=True, nullable=False)\n    model_node_idx: Optional[Series[int]] = pa.Field(coerce=True, unique=True, nullable=False)\n    geometry: GeoSeries\n    X: Series[float] = pa.Field(coerce=True, nullable=False)\n    Y: Series[float] = pa.Field(coerce=True, nullable=False)\n\n    # optional fields\n    osm_node_id: Series[str] = pa.Field(\n        coerce=True,\n        nullable=True,\n        default=\"\",\n    )\n\n    inboundReferenceIds: Optional[Series[list[str]]] = pa.Field(coerce=True, nullable=True)\n    outboundReferenceIds: Optional[Series[list[str]]] = pa.Field(coerce=True, nullable=True)\n\n    class Config:\n        \"\"\"Config for RoadNodesTable.\"\"\"\n\n        name = \"RoadNodesTable\"\n        add_missing_columns = True\n        coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadNodesTable.Config","title":"Config","text":"

Config for RoadNodesTable.

Source code in network_wrangler/models/roadway/tables.py
class Config:\n    \"\"\"Config for RoadNodesTable.\"\"\"\n\n    name = \"RoadNodesTable\"\n    add_missing_columns = True\n    coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadShapesTable","title":"RoadShapesTable","text":"

Bases: DataFrameModel

Datamodel used to validate if shapes_df is of correct format and types.

Source code in network_wrangler/models/roadway/tables.py
class RoadShapesTable(DataFrameModel):\n    \"\"\"Datamodel used to validate if shapes_df is of correct format and types.\"\"\"\n\n    shape_id: Series[str] = pa.Field(unique=False)\n    shape_id_idx: Optional[Series[int]] = pa.Field(unique=False)\n\n    geometry: GeoSeries = pa.Field()\n    ref_shape_id: Optional[Series] = pa.Field(nullable=True)\n\n    class Config:\n        \"\"\"Config for RoadShapesTable.\"\"\"\n\n        name = \"ShapesSchema\"\n        coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadShapesTable.Config","title":"Config","text":"

Config for RoadShapesTable.

Source code in network_wrangler/models/roadway/tables.py
class Config:\n    \"\"\"Config for RoadShapesTable.\"\"\"\n\n    name = \"ShapesSchema\"\n    coerce = True\n
"},{"location":"data_models/#types","title":"Types","text":"

Complex roadway types defined using Pydantic models to facilitate validation.

"},{"location":"data_models/#network_wrangler.models.roadway.types.LocationReferences","title":"LocationReferences = conlist(LocationReference, min_length=2) module-attribute","text":"

List of at least two LocationReferences which define a path.

"},{"location":"data_models/#network_wrangler.models.roadway.types.LocationReference","title":"LocationReference","text":"

Bases: BaseModel

SharedStreets-defined object for location reference.

Source code in network_wrangler/models/roadway/types.py
class LocationReference(BaseModel):\n    \"\"\"SharedStreets-defined object for location reference.\"\"\"\n\n    sequence: PositiveInt\n    point: LatLongCoordinates\n    bearing: float = Field(None, ge=-360, le=360)\n    distanceToNextRef: NonNegativeFloat\n    intersectionId: str\n
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopeLinkValueError","title":"ScopeLinkValueError","text":"

Bases: Exception

Raised when there is an issue with ScopedLinkValueList.

Source code in network_wrangler/models/roadway/types.py
class ScopeLinkValueError(Exception):\n    \"\"\"Raised when there is an issue with ScopedLinkValueList.\"\"\"\n\n    pass\n
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopedLinkValueItem","title":"ScopedLinkValueItem","text":"

Bases: RecordModel

Define a link property scoped by timespan or category.

Source code in network_wrangler/models/roadway/types.py
class ScopedLinkValueItem(RecordModel):\n    \"\"\"Define a link property scoped by timespan or category.\"\"\"\n\n    require_any_of = [\"category\", \"timespan\"]\n    model_config = ConfigDict(extra=\"forbid\")\n    category: Optional[Union[str, int]] = Field(default=DEFAULT_CATEGORY)\n    timespan: Optional[list[TimeString]] = Field(default=DEFAULT_TIMESPAN)\n    value: Union[int, float, str]\n\n    @property\n    def timespan_dt(self) -> list[list[datetime]]:\n        \"\"\"Convert timespan to list of datetime objects.\"\"\"\n        return str_to_time_list(self.timespan)\n
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopedLinkValueItem.timespan_dt","title":"timespan_dt: list[list[datetime]] property","text":"

Convert timespan to list of datetime objects.

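The conversion to datetimes can be sketched as follows. `parse_timespan` and `REF_DATE` are hypothetical stand-ins for `str_to_time_list`, assuming "HH:MM" time strings anchored to an arbitrary reference date (only the times matter for comparison).

```python
from datetime import datetime

REF_DATE = "2000-01-01"  # arbitrary anchor date for time-only comparisons

def parse_timespan(timespan: list) -> list:
    """Convert a ["start", "end"] pair of 'HH:MM' strings to datetime objects."""
    return [datetime.strptime(f"{REF_DATE} {t}", "%Y-%m-%d %H:%M") for t in timespan]

start, end = parse_timespan(["6:00", "9:00"])
print(start.time(), end.time())  # 06:00:00 09:00:00
```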
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopedLinkValueList","title":"ScopedLinkValueList","text":"

Bases: RootListMixin, RootModel

List of non-conflicting ScopedLinkValueItems.

Source code in network_wrangler/models/roadway/types.py
class ScopedLinkValueList(RootListMixin, RootModel):\n    \"\"\"List of non-conflicting ScopedLinkValueItems.\"\"\"\n\n    root: list[ScopedLinkValueItem]\n\n    def overlapping_timespans(self, timespan: Timespan):\n        \"\"\"Identify overlapping timespans in the list.\"\"\"\n        timespan_dt = str_to_time_list(timespan)\n        return [i for i in self if dt_overlaps(i.timespan_dt, timespan_dt)]\n\n    @model_validator(mode=\"after\")\n    def check_conflicting_scopes(self):\n        \"\"\"Check for conflicting scopes in the list.\"\"\"\n        conflicts = []\n        for i in self:\n            if i.timespan == DEFAULT_TIMESPAN:\n                continue\n            overlapping_ts_i = self.overlapping_timespans(i.timespan)\n            for j in overlapping_ts_i:\n                if j == i:\n                    continue\n                if j.category == i.category:\n                    conflicts.append((i, j))\n        if conflicts:\n            WranglerLogger.error(f\"Found conflicting scopes in ScopedPropertySetList:\\n{conflicts}\")\n            raise ScopeLinkValueError(\"Conflicting scopes in ScopedPropertySetList\")\n\n        return self\n
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopedLinkValueList.check_conflicting_scopes","title":"check_conflicting_scopes()","text":"

Check for conflicting scopes in the list.

Source code in network_wrangler/models/roadway/types.py
@model_validator(mode=\"after\")\ndef check_conflicting_scopes(self):\n    \"\"\"Check for conflicting scopes in the list.\"\"\"\n    conflicts = []\n    for i in self:\n        if i.timespan == DEFAULT_TIMESPAN:\n            continue\n        overlapping_ts_i = self.overlapping_timespans(i.timespan)\n        for j in overlapping_ts_i:\n            if j == i:\n                continue\n            if j.category == i.category:\n                conflicts.append((i, j))\n    if conflicts:\n        WranglerLogger.error(f\"Found conflicting scopes in ScopedPropertySetList:\\n{conflicts}\")\n        raise ScopeLinkValueError(\"Conflicting scopes in ScopedPropertySetList\")\n\n    return self\n
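The conflict rule — two scopes conflict when their categories match and their timespans overlap — can be sketched without pydantic. `find_conflicts` is a hypothetical stand-alone version; plain hours stand in for the datetime pairs used by ScopedLinkValueList.

```python
def find_conflicts(items):
    """Return pairs of items whose categories match and whose timespans overlap.

    Each item is (category, (start_hour, end_hour)).
    """
    conflicts = []
    for idx, (cat_i, (s_i, e_i)) in enumerate(items):
        for cat_j, (s_j, e_j) in items[idx + 1:]:
            # half-open interval overlap test, same category
            if cat_i == cat_j and s_i < e_j and s_j < e_i:
                conflicts.append(((cat_i, (s_i, e_i)), (cat_j, (s_j, e_j))))
    return conflicts

items = [("hov2", (6, 9)), ("hov2", (8, 10)), ("hov3", (6, 9))]
print(len(find_conflicts(items)))  # 1: the two overlapping "hov2" scopes
```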
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopedLinkValueList.overlapping_timespans","title":"overlapping_timespans(timespan)","text":"

Identify overlapping timespans in the list.

Source code in network_wrangler/models/roadway/types.py
def overlapping_timespans(self, timespan: Timespan):\n    \"\"\"Identify overlapping timespans in the list.\"\"\"\n    timespan_dt = str_to_time_list(timespan)\n    return [i for i in self if dt_overlaps(i.timespan_dt, timespan_dt)]\n
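The interval test that `overlapping_timespans` delegates to `dt_overlaps` can be sketched as a standard half-open overlap check; `overlaps` below is a hypothetical stand-in, assuming spans that merely touch at an endpoint do not overlap.

```python
from datetime import datetime

def overlaps(a, b) -> bool:
    """True if half-open intervals [a0, a1) and [b0, b1) overlap."""
    return a[0] < b[1] and b[0] < a[1]

am_peak = [datetime(2000, 1, 1, 6), datetime(2000, 1, 1, 9)]
midday = [datetime(2000, 1, 1, 9), datetime(2000, 1, 1, 15)]
print(overlaps(am_peak, midday))  # False: the spans only touch at 9:00
```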
"},{"location":"data_models/#transit","title":"Transit","text":"

Main functionality for GTFS tables including Feed object.

"},{"location":"data_models/#network_wrangler.transit.feed.feed.Feed","title":"Feed","text":"

Bases: DBModelMixin

Wrapper class around Wrangler flavored GTFS feed.

Most functionality derives from the mixin class DBModelMixin, which provides:

- validation of tables to schemas when setting a table attribute (e.g. self.trips = trips_df)
- validation of fks when setting a table attribute (e.g. self.trips = trips_df)
- hashing and deep copy functionality
- overload of __eq__ to apply only to tables in table_names
- convenience methods for accessing tables

Attributes:

- table_names: list of table names in GTFS feed.
- tables: list of tables as dataframes.
- stop_times: stop_times dataframe with roadway node_ids
- stops: stops dataframe
- shapes: shapes dataframe
- trips: trips dataframe
- frequencies: frequencies dataframe
- routes: routes dataframe
- net: TransitNetwork object

Source code in network_wrangler/transit/feed/feed.py
class Feed(DBModelMixin):\n    \"\"\"Wrapper class around Wrangler flavored GTFS feed.\n\n    Most functionality derives from mixin class DBModelMixin which provides:\n    - validation of tables to schemas when setting a table attribute (e.g. self.trips = trips_df)\n    - validation of fks when setting a table attribute (e.g. self.trips = trips_df)\n    - hashing and deep copy functionality\n    - overload of __eq__ to apply only to tables in table_names.\n    - convenience methods for accessing tables\n\n    Attributes:\n        table_names: list of table names in GTFS feed.\n        tables: list tables as dataframes.\n        stop_times: stop_times dataframe with roadway node_ids\n        stops: stops dataframe\n        shapes: shapes dataframe\n        trips: trips dataframe\n        frequencies: frequencies dataframe\n        routes: route dataframe\n        net: TransitNetwork object\n    \"\"\"\n\n    # the ordering here matters because the stops need to be added before stop_times if\n    # stop times needs to be converted\n    _table_models = {\n        \"agencies\": AgenciesTable,\n        \"frequencies\": FrequenciesTable,\n        \"routes\": RoutesTable,\n        \"shapes\": WranglerShapesTable,\n        \"stops\": WranglerStopsTable,\n        \"trips\": TripsTable,\n        \"stop_times\": WranglerStopTimesTable,\n    }\n\n    _converters = {\"stop_times\": gtfs_to_wrangler_stop_times}\n\n    table_names = [\n        \"frequencies\",\n        \"routes\",\n        \"shapes\",\n        \"stops\",\n        \"trips\",\n        \"stop_times\",\n    ]\n\n    optional_table_names = [\"agencies\"]\n\n    def __init__(self, **kwargs):\n        \"\"\"Create a Feed object from a dictionary of DataFrames representing a GTFS feed.\n\n        Args:\n            kwargs: A dictionary containing DataFrames representing the tables of a GTFS feed.\n        \"\"\"\n        self._net = None\n        self.feed_path: Path = None\n        self.initialize_tables(**kwargs)\n\n   
     # Set extra provided attributes but just FYI in logger.\n        extra_attr = {k: v for k, v in kwargs.items() if k not in self.table_names}\n        if extra_attr:\n            WranglerLogger.info(f\"Adding additional attributes to Feed: {extra_attr.keys()}\")\n        for k, v in extra_attr.items():\n            self.__setattr__(k, v)\n\n    def set_by_id(\n        self,\n        table_name: str,\n        set_df: pd.DataFrame,\n        id_property: str = \"trip_id\",\n        properties: list[str] = None,\n    ):\n        \"\"\"Set property values in a specific table for a list of IDs.\n\n        Args:\n            table_name (str): Name of the table to modify.\n            set_df (pd.DataFrame): DataFrame with columns 'trip_id' and 'value' containing\n                trip IDs and values to set for the specified property.\n            id_property: Property to use as ID to set by. Defaults to \"trip_id\".\n            properties: List of properties to set which are in set_df. If not specified, will set\n                all properties.\n        \"\"\"\n        table_df = self.get_table(table_name)\n        updated_df = update_df_by_col_value(table_df, set_df, id_property, properties=properties)\n        self.__dict__[table_name] = updated_df\n
"},{"location":"data_models/#network_wrangler.transit.feed.feed.Feed.__init__","title":"__init__(**kwargs)","text":"

Create a Feed object from a dictionary of DataFrames representing a GTFS feed.

Parameters:

- kwargs (default {}): A dictionary containing DataFrames representing the tables of a GTFS feed.

Source code in network_wrangler/transit/feed/feed.py
def __init__(self, **kwargs):\n    \"\"\"Create a Feed object from a dictionary of DataFrames representing a GTFS feed.\n\n    Args:\n        kwargs: A dictionary containing DataFrames representing the tables of a GTFS feed.\n    \"\"\"\n    self._net = None\n    self.feed_path: Path = None\n    self.initialize_tables(**kwargs)\n\n    # Set extra provided attributes but just FYI in logger.\n    extra_attr = {k: v for k, v in kwargs.items() if k not in self.table_names}\n    if extra_attr:\n        WranglerLogger.info(f\"Adding additional attributes to Feed: {extra_attr.keys()}\")\n    for k, v in extra_attr.items():\n        self.__setattr__(k, v)\n
"},{"location":"data_models/#network_wrangler.transit.feed.feed.Feed.set_by_id","title":"set_by_id(table_name, set_df, id_property='trip_id', properties=None)","text":"

Set property values in a specific table for a list of IDs.

Parameters:

- table_name (str, required): Name of the table to modify.
- set_df (DataFrame, required): DataFrame with columns 'trip_id' and 'value' containing trip IDs and values to set for the specified property.
- id_property (str, default "trip_id"): Property to use as ID to set by.
- properties (list[str], default None): List of properties to set which are in set_df. If not specified, will set all properties.

Source code in network_wrangler/transit/feed/feed.py
def set_by_id(\n    self,\n    table_name: str,\n    set_df: pd.DataFrame,\n    id_property: str = \"trip_id\",\n    properties: list[str] = None,\n):\n    \"\"\"Set property values in a specific table for a list of IDs.\n\n    Args:\n        table_name (str): Name of the table to modify.\n        set_df (pd.DataFrame): DataFrame with columns 'trip_id' and 'value' containing\n            trip IDs and values to set for the specified property.\n        id_property: Property to use as ID to set by. Defaults to \"trip_id\".\n        properties: List of properties to set which are in set_df. If not specified, will set\n            all properties.\n    \"\"\"\n    table_df = self.get_table(table_name)\n    updated_df = update_df_by_col_value(table_df, set_df, id_property, properties=properties)\n    self.__dict__[table_name] = updated_df\n
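The semantics of `update_df_by_col_value` can be sketched without pandas. `update_by_id` below is a hypothetical pure-Python analogue operating on lists of dicts: rows matching on `id_property` have their values overwritten, optionally restricted to `properties`.

```python
def update_by_id(rows, set_rows, id_property="trip_id", properties=None):
    """Overwrite matching rows' values with those from set_rows, joined on id_property."""
    updates = {r[id_property]: r for r in set_rows}
    out = []
    for row in rows:
        new_row = dict(row)  # leave the input rows unmodified
        upd = updates.get(new_row[id_property])
        if upd is not None:
            for key, value in upd.items():
                if key != id_property and (properties is None or key in properties):
                    new_row[key] = value
        out.append(new_row)
    return out

trips = [{"trip_id": "t1", "headway": 600}, {"trip_id": "t2", "headway": 900}]
print(update_by_id(trips, [{"trip_id": "t2", "headway": 300}]))
```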
"},{"location":"data_models/#network_wrangler.transit.feed.feed.FeedValidationError","title":"FeedValidationError","text":"

Bases: Exception

Raised when there is an issue with the validation of the GTFS data.

Source code in network_wrangler/transit/feed/feed.py
class FeedValidationError(Exception):\n    \"\"\"Raised when there is an issue with the validation of the GTFS data.\"\"\"\n\n    pass\n
"},{"location":"data_models/#network_wrangler.transit.feed.feed.merge_shapes_to_stop_times","title":"merge_shapes_to_stop_times(stop_times, shapes, trips)","text":"

Add shape_id and shape_pt_sequence to stop_times dataframe.

Parameters:

- stop_times (DataFrame[WranglerStopTimesTable], required): stop_times dataframe to add shape_id and shape_pt_sequence to.
- shapes (DataFrame[WranglerShapesTable], required): shapes dataframe to add to stop_times.
- trips (DataFrame[TripsTable], required): trips dataframe to link stop_times to shapes.

Returns:

- DataFrame[WranglerStopTimesTable]: stop_times dataframe with shape_id and shape_pt_sequence added.

Source code in network_wrangler/transit/feed/feed.py
def merge_shapes_to_stop_times(\n    stop_times: DataFrame[WranglerStopTimesTable],\n    shapes: DataFrame[WranglerShapesTable],\n    trips: DataFrame[TripsTable],\n) -> DataFrame[WranglerStopTimesTable]:\n    \"\"\"Add shape_id and shape_pt_sequence to stop_times dataframe.\n\n    Args:\n        stop_times: stop_times dataframe to add shape_id and shape_pt_sequence to.\n        shapes: shapes dataframe to add to stop_times.\n        trips: trips dataframe to link stop_times to shapes\n\n    Returns:\n        stop_times dataframe with shape_id and shape_pt_sequence added.\n    \"\"\"\n    stop_times_w_shape_id = stop_times.merge(\n        trips[[\"trip_id\", \"shape_id\"]], on=\"trip_id\", how=\"left\"\n    )\n\n    stop_times_w_shapes = stop_times_w_shape_id.merge(\n        shapes,\n        how=\"left\",\n        left_on=[\"shape_id\", \"model_node_id\"],\n        right_on=[\"shape_id\", \"shape_model_node_id\"],\n    )\n    stop_times_w_shapes = stop_times_w_shapes.drop(columns=[\"shape_model_node_id\"])\n    return stop_times_w_shapes\n
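The two left merges above can be sketched with plain dict lookups: first `trips` maps each `trip_id` to a `shape_id`, then `shapes` maps `(shape_id, model_node_id)` to a `shape_pt_sequence`. This is a hypothetical pandas-free analogue, not the library implementation.

```python
def merge_shapes_to_stop_times(stop_times, shapes, trips):
    """Attach shape_id (via trips) and shape_pt_sequence (via shapes) to stop_times."""
    trip_to_shape = {t["trip_id"]: t["shape_id"] for t in trips}
    shape_node_to_seq = {
        (s["shape_id"], s["shape_model_node_id"]): s["shape_pt_sequence"] for s in shapes
    }
    merged = []
    for st in stop_times:
        rec = dict(st)
        rec["shape_id"] = trip_to_shape.get(st["trip_id"])
        rec["shape_pt_sequence"] = shape_node_to_seq.get((rec["shape_id"], st["model_node_id"]))
        merged.append(rec)
    return merged

stop_times = [{"trip_id": "t1", "model_node_id": 101}]
trips = [{"trip_id": "t1", "shape_id": "s1"}]
shapes = [{"shape_id": "s1", "shape_model_node_id": 101, "shape_pt_sequence": 3}]
print(merge_shapes_to_stop_times(stop_times, shapes, trips))
```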
"},{"location":"data_models/#network_wrangler.transit.feed.feed.stop_count_by_trip","title":"stop_count_by_trip(stop_times)","text":"

Returns dataframe with trip_id and stop_count from stop_times.

Source code in network_wrangler/transit/feed/feed.py
def stop_count_by_trip(\n    stop_times: DataFrame[WranglerStopTimesTable],\n) -> pd.DataFrame:\n    \"\"\"Returns dataframe with trip_id and stop_count from stop_times.\"\"\"\n    stops_count = stop_times.groupby(\"trip_id\").size()\n    return stops_count.reset_index(name=\"stop_count\")\n
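The groupby-size step is equivalent to counting records per `trip_id`; a minimal stdlib sketch (a hypothetical analogue, returning a sorted list of dicts rather than a DataFrame):

```python
from collections import Counter

def stop_count_by_trip(stop_times):
    """Count stop_time records per trip_id, mirroring groupby("trip_id").size()."""
    counts = Counter(st["trip_id"] for st in stop_times)
    return [{"trip_id": t, "stop_count": n} for t, n in sorted(counts.items())]

stop_times = [{"trip_id": "t1"}, {"trip_id": "t1"}, {"trip_id": "t2"}]
print(stop_count_by_trip(stop_times))
# [{'trip_id': 't1', 'stop_count': 2}, {'trip_id': 't2', 'stop_count': 1}]
```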
"},{"location":"data_models/#pure-gtfs-tables","title":"Pure GTFS Tables","text":"

Models for when you want to use vanilla (non-Wrangler) GTFS.

"},{"location":"data_models/#network_wrangler.models.gtfs.AgencyRecord","title":"AgencyRecord","text":"

Bases: BaseModel

Represents a transit agency.

Source code in network_wrangler/models/gtfs/records.py
class AgencyRecord(BaseModel):\n    \"\"\"Represents a transit agency.\"\"\"\n\n    agency_id: AgencyID\n    agency_name: Optional[AgencyName]\n    agency_url: Optional[HttpUrl]\n    agency_timezone: Timezone\n    agency_lang: Optional[Language]\n    agency_phone: Optional[AgencyPhone]\n    agency_fare_url: Optional[AgencyFareUrl]\n    agency_email: Optional[AgencyEmail]\n
"},{"location":"data_models/#network_wrangler.models.gtfs.BikesAllowed","title":"BikesAllowed","text":"

Bases: IntEnum

Indicates whether bicycles are allowed.

Source code in network_wrangler/models/gtfs/types.py
class BikesAllowed(IntEnum):\n    \"\"\"Indicates whether bicycles are allowed.\"\"\"\n\n    NO_INFORMATION = 0\n    ALLOWED = 1\n    NOT_ALLOWED = 2\n
"},{"location":"data_models/#network_wrangler.models.gtfs.DirectionID","title":"DirectionID","text":"

Bases: IntEnum

Indicates the direction of travel for a trip.

Source code in network_wrangler/models/gtfs/types.py
class DirectionID(IntEnum):\n    \"\"\"Indicates the direction of travel for a trip.\"\"\"\n\n    OUTBOUND = 0\n    INBOUND = 1\n
"},{"location":"data_models/#network_wrangler.models.gtfs.FrequencyRecord","title":"FrequencyRecord","text":"

Bases: BaseModel

Represents headway (time between trips) for routes with variable frequency.

Source code in network_wrangler/models/gtfs/records.py
class FrequencyRecord(BaseModel):\n    \"\"\"Represents headway (time between trips) for routes with variable frequency.\"\"\"\n\n    trip_id: TripID\n    start_time: StartTime\n    end_time: EndTime\n    headway_secs: HeadwaySecs\n
"},{"location":"data_models/#network_wrangler.models.gtfs.LocationType","title":"LocationType","text":"

Bases: IntEnum

Indicates the type of node the stop record represents.

Full documentation: https://gtfs.org/schedule/reference/#stopstxt

Source code in network_wrangler/models/gtfs/types.py
class LocationType(IntEnum):\n    \"\"\"Indicates the type of node the stop record represents.\n\n    Full documentation: https://gtfs.org/schedule/reference/#stopstxt\n    \"\"\"\n\n    STOP_PLATFORM = 0\n    STATION = 1\n    ENTRANCE_EXIT = 2\n    GENERIC_NODE = 3\n    BOARDING_AREA = 4\n
"},{"location":"data_models/#network_wrangler.models.gtfs.MockPaModel","title":"MockPaModel","text":"

Mock model for when Pandera is not installed.

Source code in network_wrangler/models/gtfs/__init__.py
class MockPaModel:\n    \"\"\"Mock model for when Pandera is not installed.\"\"\"\n\n    def __init__(self, **kwargs):\n        \"\"\"Mock model initialization.\"\"\"\n        for key, value in kwargs.items():\n            setattr(self, key, value)\n
"},{"location":"data_models/#network_wrangler.models.gtfs.MockPaModel.__init__","title":"__init__(**kwargs)","text":"

Mock model initialization.

Source code in network_wrangler/models/gtfs/__init__.py
def __init__(self, **kwargs):\n    \"\"\"Mock model initialization.\"\"\"\n    for key, value in kwargs.items():\n        setattr(self, key, value)\n
"},{"location":"data_models/#network_wrangler.models.gtfs.PickupDropoffType","title":"PickupDropoffType","text":"

Bases: IntEnum

Indicates the pickup method for passengers at a stop.

Full documentation: https://gtfs.org/schedule/reference

Source code in network_wrangler/models/gtfs/types.py
class PickupDropoffType(IntEnum):\n    \"\"\"Indicates the pickup method for passengers at a stop.\n\n    Full documentation: https://gtfs.org/schedule/reference\n    \"\"\"\n\n    REGULAR = 0\n    NONE = 1\n    PHONE_AGENCY = 2\n    COORDINATE_WITH_DRIVER = 3\n
"},{"location":"data_models/#network_wrangler.models.gtfs.RouteRecord","title":"RouteRecord","text":"

Bases: BaseModel

Represents a transit route.

Source code in network_wrangler/models/gtfs/records.py
class RouteRecord(BaseModel):\n    \"\"\"Represents a transit route.\"\"\"\n\n    route_id: RouteID\n    agency_id: AgencyID\n    route_type: RouteType\n    route_short_name: RouteShortName\n    route_long_name: RouteLongName\n\n    # Optional\n    route_desc: Optional[RouteDesc]\n    route_url: Optional[RouteUrl]\n    route_color: Optional[RouteColor]\n    route_text_color: Optional[RouteTextColor]\n
"},{"location":"data_models/#network_wrangler.models.gtfs.RouteType","title":"RouteType","text":"

Bases: IntEnum

Indicates the type of transportation used on a route.

Full documentation: https://gtfs.org/schedule/reference

Source code in network_wrangler/models/gtfs/types.py
class RouteType(IntEnum):\n    \"\"\"Indicates the type of transportation used on a route.\n\n    Full documentation: https://gtfs.org/schedule/reference\n    \"\"\"\n\n    TRAM = 0\n    SUBWAY = 1\n    RAIL = 2\n    BUS = 3\n    FERRY = 4\n    CABLE_TRAM = 5\n    AERIAL_LIFT = 6\n    FUNICULAR = 7\n    TROLLEYBUS = 11\n    MONORAIL = 12\n
"},{"location":"data_models/#network_wrangler.models.gtfs.ShapeRecord","title":"ShapeRecord","text":"

Bases: BaseModel

Represents a point on a path (shape) that a transit vehicle takes.

Source code in network_wrangler/models/gtfs/records.py
class ShapeRecord(BaseModel):\n    \"\"\"Represents a point on a path (shape) that a transit vehicle takes.\"\"\"\n\n    shape_id: ShapeID\n    shape_pt_lat: Latitude\n    shape_pt_lon: Longitude\n    shape_pt_sequence: ShapePtSequence\n\n    # Optional\n    shape_dist_traveled: Optional[ShapeDistTraveled]\n
"},{"location":"data_models/#network_wrangler.models.gtfs.StopRecord","title":"StopRecord","text":"

Bases: BaseModel

Represents a stop or station where vehicles pick up or drop off passengers.

Source code in network_wrangler/models/gtfs/records.py
class StopRecord(BaseModel):\n    \"\"\"Represents a stop or station where vehicles pick up or drop off passengers.\"\"\"\n\n    stop_id: StopID\n    stop_lat: Latitude\n    stop_lon: Longitude\n\n    # Optional\n    stop_code: Optional[StopCode]\n    stop_name: Optional[StopName]\n    tts_stop_name: Optional[TTSStopName]\n    stop_desc: Optional[StopDesc]\n    zone_id: Optional[ZoneID]\n    stop_url: Optional[StopUrl]\n    location_type: Optional[LocationType]\n    parent_station: Optional[ParentStation]\n    stop_timezone: Optional[Timezone]\n    wheelchair_boarding: Optional[WheelchairAccessible]\n
"},{"location":"data_models/#network_wrangler.models.gtfs.StopTimeRecord","title":"StopTimeRecord","text":"

Bases: BaseModel

Times that a vehicle arrives at and departs from stops for each trip.

Source code in network_wrangler/models/gtfs/records.py
class StopTimeRecord(BaseModel):\n    \"\"\"Times that a vehicle arrives at and departs from stops for each trip.\"\"\"\n\n    trip_id: TripID\n    arrival_time: ArrivalTime\n    departure_time: DepartureTime\n    stop_id: StopID\n    stop_sequence: StopSequence\n\n    # Optional\n    stop_headsign: Optional[StopHeadsign]\n    pickup_type: Optional[PickupType]\n    drop_off_type: Optional[DropoffType]\n    shape_dist_traveled: Optional[ShapeDistTraveled]\n    timepoint: Optional[Timepoint]\n
"},{"location":"data_models/#network_wrangler.models.gtfs.TimepointType","title":"TimepointType","text":"

Bases: IntEnum

Indicates whether the specified time is exact or approximate.

Full documentation: https://gtfs.org/schedule/reference

Source code in network_wrangler/models/gtfs/types.py
class TimepointType(IntEnum):\n    \"\"\"Indicates whether the specified time is exact or approximate.\n\n    Full documentation: https://gtfs.org/schedule/reference\n    \"\"\"\n\n    APPROXIMATE = 0\n    EXACT = 1\n
"},{"location":"data_models/#network_wrangler.models.gtfs.TripRecord","title":"TripRecord","text":"

Bases: BaseModel

Describes trips, which are sequences of two or more stops that occur at a specific time.

Source code in network_wrangler/models/gtfs/records.py
class TripRecord(BaseModel):\n    \"\"\"Describes trips, which are sequences of two or more stops that occur at a specific time.\"\"\"\n\n    route_id: RouteID\n    service_id: ServiceID\n    trip_id: TripID\n    trip_headsign: TripHeadsign\n    trip_short_name: TripShortName\n    direction_id: DirectionID\n    block_id: BlockID\n    shape_id: ShapeID\n    wheelchair_accessible: WheelchairAccessible\n    bikes_allowed: BikesAllowed\n
"},{"location":"data_models/#network_wrangler.models.gtfs.WheelchairAccessible","title":"WheelchairAccessible","text":"

Bases: IntEnum

Indicates whether the trip is wheelchair accessible.

Full documentation: https://gtfs.org/schedule/reference

Source code in network_wrangler/models/gtfs/types.py
class WheelchairAccessible(IntEnum):\n    \"\"\"Indicates whether the trip is wheelchair accessible.\n\n    Full documentation: https://gtfs.org/schedule/reference\n    \"\"\"\n\n    NO_INFORMATION = 0\n    POSSIBLE = 1\n    NOT_POSSIBLE = 2\n
"},{"location":"data_models/#network_wrangler.models.gtfs.WranglerShapeRecord","title":"WranglerShapeRecord","text":"

Bases: ShapeRecord

Wrangler-flavored ShapeRecord.

Source code in network_wrangler/models/gtfs/records.py
class WranglerShapeRecord(ShapeRecord):\n    \"\"\"Wrangler-flavored ShapeRecord.\"\"\"\n\n    shape_model_node_id: int\n
"},{"location":"data_models/#network_wrangler.models.gtfs.WranglerStopRecord","title":"WranglerStopRecord","text":"

Bases: StopRecord

Wrangler-flavored StopRecord.

Source code in network_wrangler/models/gtfs/records.py
class WranglerStopRecord(StopRecord):\n    \"\"\"Wrangler-flavored StopRecord.\"\"\"\n\n    trip_id: TripID\n
"},{"location":"data_models/#network_wrangler.models.gtfs.WranglerStopTimeRecord","title":"WranglerStopTimeRecord","text":"

Bases: StopTimeRecord

Wrangler-flavored StopTimeRecord.

Source code in network_wrangler/models/gtfs/records.py
class WranglerStopTimeRecord(StopTimeRecord):\n    \"\"\"Wrangler-flavored StopTimeRecord.\"\"\"\n\n    model_node_id: int\n\n    model_config = ConfigDict(\n        protected_namespaces=(),\n    )\n
"},{"location":"data_models/#project-cards","title":"Project Cards","text":""},{"location":"data_models/#projects","title":"Projects","text":"

Data models for the roadway deletion project card (e.g. links and nodes to delete).

Pydantic models for roadway property changes which align with ProjectCard schemas.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_deletion.RoadwayDeletion","title":"RoadwayDeletion","text":"

Bases: RecordModel

Requirements for describing roadway deletion project card (e.g. to delete).

Source code in network_wrangler/models/projects/roadway_deletion.py
class RoadwayDeletion(RecordModel):\n    \"\"\"Requirements for describing roadway deletion project card (e.g. to delete).\"\"\"\n\n    require_any_of: ClassVar[AnyOf] = [[\"links\", \"nodes\"]]\n    model_config = ConfigDict(extra=\"forbid\")\n\n    links: Optional[SelectLinksDict] = None\n    nodes: Optional[SelectNodesDict] = None\n    clean_shapes: Optional[bool] = False\n    clean_nodes: Optional[bool] = False\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.GroupedScopedPropertySetItem","title":"GroupedScopedPropertySetItem","text":"

Bases: BaseModel

Value for setting property value for a single time of day and category.

Source code in network_wrangler/models/projects/roadway_property_change.py
class GroupedScopedPropertySetItem(BaseModel):\n    \"\"\"Value for setting property value for a single time of day and category.\"\"\"\n\n    model_config = ConfigDict(extra=\"forbid\", exclude_none=True)\n\n    category: Optional[Union[str, int]] = None\n    timespan: Optional[TimespanString] = None\n    categories: Optional[list[Any]] = []\n    timespans: Optional[list[TimespanString]] = []\n    set: Optional[Any] = None\n    existing: Optional[Any] = None\n    change: Optional[Union[int, float]] = None\n    _examples = [\n        {\"category\": \"hov3\", \"timespan\": [\"6:00\", \"9:00\"], \"set\": 2.0},\n        {\"category\": \"hov2\", \"set\": 2.0},\n        {\"timespan\": [\"12:00\", \"2:00\"], \"change\": -1},\n    ]\n\n    @model_validator(mode=\"before\")\n    @classmethod\n    def check_set_or_change(cls, data: dict):\n        \"\"\"Validate that each item has a set or change value.\"\"\"\n        if not isinstance(data, dict):\n            return data\n        if \"set\" in data and \"change\" in data:\n            WranglerLogger.warning(\"Both set and change are set. Ignoring change.\")\n            data[\"change\"] = None\n        return data\n\n    @model_validator(mode=\"before\")\n    @classmethod\n    def check_categories_or_timespans(cls, data: Any) -> Any:\n        \"\"\"Validate that each item has a category or timespan value.\"\"\"\n        if not isinstance(data, dict):\n            return data\n        require_any_of = [\"category\", \"timespan\", \"categories\", \"timespans\"]\n        if not any([attr in data for attr in require_any_of]):\n            raise ValueError(f\"Require at least one of {require_any_of}\")\n        return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.GroupedScopedPropertySetItem.check_categories_or_timespans","title":"check_categories_or_timespans(data) classmethod","text":"

Validate that each item has a category or timespan value.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"before\")\n@classmethod\ndef check_categories_or_timespans(cls, data: Any) -> Any:\n    \"\"\"Validate that each item has a category or timespan value.\"\"\"\n    if not isinstance(data, dict):\n        return data\n    require_any_of = [\"category\", \"timespan\", \"categories\", \"timespans\"]\n    if not any([attr in data for attr in require_any_of]):\n        raise ValueError(f\"Require at least one of {require_any_of}\")\n    return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.GroupedScopedPropertySetItem.check_set_or_change","title":"check_set_or_change(data) classmethod","text":"

Validate that each item has a set or change value.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"before\")\n@classmethod\ndef check_set_or_change(cls, data: dict):\n    \"\"\"Validate that each item has a set or change value.\"\"\"\n    if not isinstance(data, dict):\n        return data\n    if \"set\" in data and \"change\" in data:\n        WranglerLogger.warning(\"Both set and change are set. Ignoring change.\")\n        data[\"change\"] = None\n    return data\n
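The precedence rule the validator enforces — when both `set` and `change` are supplied, `set` wins and `change` is dropped — can be sketched as plain dict logic. `resolve_set_or_change` is a hypothetical stand-alone version without the pydantic and logging machinery.

```python
def resolve_set_or_change(data: dict) -> dict:
    """If both 'set' and 'change' are present, 'set' wins and 'change' is nulled."""
    if "set" in data and "change" in data:
        data = dict(data)  # avoid mutating the caller's dict
        data["change"] = None
    return data

print(resolve_set_or_change({"set": 2.0, "change": -1}))  # {'set': 2.0, 'change': None}
print(resolve_set_or_change({"change": -1}))  # {'change': -1}
```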
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.IndivScopedPropertySetItem","title":"IndivScopedPropertySetItem","text":"

Bases: BaseModel

Value for setting property value for a single time of day and category.

Source code in network_wrangler/models/projects/roadway_property_change.py
class IndivScopedPropertySetItem(BaseModel):\n    \"\"\"Value for setting property value for a single time of day and category.\"\"\"\n\n    model_config = ConfigDict(extra=\"forbid\", exclude_none=True)\n\n    category: Optional[Union[str, int]] = DEFAULT_CATEGORY\n    timespan: Optional[TimespanString] = DEFAULT_TIMESPAN\n    set: Optional[Any] = None\n    existing: Optional[Any] = None\n    change: Optional[Union[int, float]] = None\n    _examples = [\n        {\"category\": \"hov3\", \"timespan\": [\"6:00\", \"9:00\"], \"set\": 2.0},\n        {\"category\": \"hov2\", \"set\": 2.0},\n        {\"timespan\": [\"12:00\", \"2:00\"], \"change\": -1},\n    ]\n\n    @property\n    def timespan_dt(self) -> list[list[datetime]]:\n        \"\"\"Convert timespan to list of datetime objects.\"\"\"\n        return str_to_time_list(self.timespan)\n\n    @model_validator(mode=\"before\")\n    @classmethod\n    def check_set_or_change(cls, data: dict):\n        \"\"\"Validate that each item has a set or change value.\"\"\"\n        if not isinstance(data, dict):\n            return data\n        if data.get(\"set\") and data.get(\"change\"):\n            WranglerLogger.warning(\"Both set and change are set. Ignoring change.\")\n            data[\"change\"] = None\n\n        WranglerLogger.debug(f\"Data: {data}\")\n        if data.get(\"set\", None) is None and data.get(\"change\", None) is None:\n            WranglerLogger.debug(\n                f\"Must have `set` or `change` in IndivScopedPropertySetItem. 
\\\n                           Found: {data}\"\n            )\n            raise ValueError(\"Must have `set` or `change` in IndivScopedPropertySetItem\")\n        return data\n\n    @model_validator(mode=\"before\")\n    @classmethod\n    def check_categories_or_timespans(cls, data: Any) -> Any:\n        \"\"\"Validate that each item has a category or timespan value.\"\"\"\n        if not isinstance(data, dict):\n            return data\n        require_any_of = [\"category\", \"timespan\"]\n        if not any([attr in data for attr in require_any_of]):\n            raise ValidationError(f\"Require at least one of {require_any_of}\")\n        return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.IndivScopedPropertySetItem.timespan_dt","title":"timespan_dt: list[list[datetime]] property","text":"

Convert timespan to list of datetime objects.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.IndivScopedPropertySetItem.check_categories_or_timespans","title":"check_categories_or_timespans(data) classmethod","text":"

Validate that each item has a category or timespan value.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"before\")\n@classmethod\ndef check_categories_or_timespans(cls, data: Any) -> Any:\n    \"\"\"Validate that each item has a category or timespan value.\"\"\"\n    if not isinstance(data, dict):\n        return data\n    require_any_of = [\"category\", \"timespan\"]\n    if not any([attr in data for attr in require_any_of]):\n        raise ValidationError(f\"Require at least one of {require_any_of}\")\n    return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.IndivScopedPropertySetItem.check_set_or_change","title":"check_set_or_change(data) classmethod","text":"

Validate that each item has a set or change value.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"before\")\n@classmethod\ndef check_set_or_change(cls, data: dict):\n    \"\"\"Validate that each item has a set or change value.\"\"\"\n    if not isinstance(data, dict):\n        return data\n    if data.get(\"set\") and data.get(\"change\"):\n        WranglerLogger.warning(\"Both set and change are set. Ignoring change.\")\n        data[\"change\"] = None\n\n    WranglerLogger.debug(f\"Data: {data}\")\n    if data.get(\"set\", None) is None and data.get(\"change\", None) is None:\n        WranglerLogger.debug(\n            f\"Must have `set` or `change` in IndivScopedPropertySetItem. \\\n                       Found: {data}\"\n        )\n        raise ValueError(\"Must have `set` or `change` in IndivScopedPropertySetItem\")\n    return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.NodeGeometryChange","title":"NodeGeometryChange","text":"

Bases: RecordModel

Value for setting node geometry given a model_node_id.

Source code in network_wrangler/models/projects/roadway_property_change.py
class NodeGeometryChange(RecordModel):\n    \"\"\"Value for setting node geometry given a model_node_id.\"\"\"\n\n    model_config = ConfigDict(extra=\"ignore\")\n    X: float\n    Y: float\n    in_crs: Optional[int] = LAT_LON_CRS\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.NodeGeometryChangeTable","title":"NodeGeometryChangeTable","text":"

Bases: DataFrameModel

DataFrameModel for setting node geometry given a model_node_id.

Source code in network_wrangler/models/projects/roadway_property_change.py
class NodeGeometryChangeTable(DataFrameModel):\n    \"\"\"DataFrameModel for setting node geometry given a model_node_id.\"\"\"\n\n    model_node_id: Series[int]\n    X: Series[float] = Field(coerce=True)\n    Y: Series[float] = Field(coerce=True)\n    in_crs: Series[int] = Field(default=LAT_LON_CRS)\n\n    class Config:\n        \"\"\"Config for NodeGeometryChangeTable.\"\"\"\n\n        add_missing_columns = True\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.NodeGeometryChangeTable.Config","title":"Config","text":"

Config for NodeGeometryChangeTable.

Source code in network_wrangler/models/projects/roadway_property_change.py
class Config:\n    \"\"\"Config for NodeGeometryChangeTable.\"\"\"\n\n    add_missing_columns = True\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.RoadPropertyChange","title":"RoadPropertyChange","text":"

Bases: RecordModel

Value for setting property value for a time of day and category.

Source code in network_wrangler/models/projects/roadway_property_change.py
class RoadPropertyChange(RecordModel):\n    \"\"\"Value for setting property value for a time of day and category.\"\"\"\n\n    model_config = ConfigDict(extra=\"forbid\", exclude_none=True)\n\n    existing: Optional[Any] = None\n    change: Optional[Union[int, float]] = None\n    set: Optional[Any] = None\n    scoped: Optional[Union[None, ScopedPropertySetList]] = None\n\n    require_one_of: ClassVar[OneOf] = [[\"change\", \"set\"]]\n\n    _examples = [\n        {\"set\": 1},\n        {\"existing\": 2, \"change\": -1},\n        {\n            \"set\": 0,\n            \"scoped\": [\n                {\"timespan\": [\"6:00\", \"9:00\"], \"value\": 2.0},\n                {\"timespan\": [\"9:00\", \"15:00\"], \"value\": 4.0},\n            ],\n        },\n        {\n            \"set\": 0,\n            \"scoped\": [\n                {\n                    \"categories\": [\"hov3\", \"hov2\"],\n                    \"timespan\": [\"6:00\", \"9:00\"],\n                    \"value\": 2.0,\n                },\n                {\"category\": \"truck\", \"timespan\": [\"6:00\", \"9:00\"], \"value\": 4.0},\n            ],\n        },\n        {\n            \"set\": 0,\n            \"scoped\": [\n                {\"categories\": [\"hov3\", \"hov2\"], \"value\": 2.0},\n                {\"category\": \"truck\", \"value\": 4.0},\n            ],\n        },\n    ]\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopeConflictError","title":"ScopeConflictError","text":"

Bases: Exception

Raised when there is a scope conflict in a list of ScopedPropertySetItems.

Source code in network_wrangler/models/projects/roadway_property_change.py
class ScopeConflictError(Exception):\n    \"\"\"Raised when there is a scope conflict in a list of ScopedPropertySetItems.\"\"\"\n\n    pass\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList","title":"ScopedPropertySetList","text":"

Bases: RootListMixin, RootModel

List of ScopedPropertySetItems used to evaluate and apply changes to roadway properties.

Source code in network_wrangler/models/projects/roadway_property_change.py
class ScopedPropertySetList(RootListMixin, RootModel):\n    \"\"\"List of ScopedPropertySetItems used to evaluate and apply changes to roadway properties.\"\"\"\n\n    root: list[IndivScopedPropertySetItem]\n\n    @model_validator(mode=\"before\")\n    @classmethod\n    def check_set_or_change(cls, data: list):\n        \"\"\"Validate that each item has a set or change value.\"\"\"\n        data = _grouped_to_indiv_list_of_scopedpropsetitem(data)\n        return data\n\n    @model_validator(mode=\"after\")\n    def check_conflicting_scopes(self):\n        \"\"\"Check for conflicting scopes in the list of ScopedPropertySetItem.\"\"\"\n        conflicts = []\n        for i in self:\n            if i.timespan == DEFAULT_TIMESPAN:\n                continue\n            overlapping_ts_i = self.overlapping_timespans(i.timespan)\n            for j in overlapping_ts_i:\n                if j == i:\n                    continue\n                if j.category == i.category:\n                    conflicts.append((i, j))\n        if conflicts:\n            WranglerLogger.error(f\"Found conflicting scopes in ScopedPropertySetList:\\n{conflicts}\")\n            raise ScopeConflictError(\"Conflicting scopes in ScopedPropertySetList\")\n\n        return self\n\n    def overlapping_timespans(self, timespan: TimespanString) -> list[IndivScopedPropertySetItem]:\n        \"\"\"Return a list of items that overlap with the given timespan.\"\"\"\n        timespan_dt = str_to_time_list(timespan)\n        return [i for i in self if dt_overlaps(i.timespan_dt, timespan_dt)]\n\n    @property\n    def change_items(self) -> list[IndivScopedPropertySetItem]:\n        \"\"\"Filter out items that do not have a change value.\"\"\"\n        WranglerLogger.debug(f\"self.root[0]: {self.root[0]}\")\n        return [i for i in self if i.change is not None]\n\n    @property\n    def set_items(self):\n        \"\"\"Filter out items that do not have a set value.\"\"\"\n        return [i for i in self if 
i.set is not None]\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList.change_items","title":"change_items: list[IndivScopedPropertySetItem] property","text":"

Filter out items that do not have a change value.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList.set_items","title":"set_items property","text":"

Filter out items that do not have a set value.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList.check_conflicting_scopes","title":"check_conflicting_scopes()","text":"

Check for conflicting scopes in the list of ScopedPropertySetItem.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"after\")\ndef check_conflicting_scopes(self):\n    \"\"\"Check for conflicting scopes in the list of ScopedPropertySetItem.\"\"\"\n    conflicts = []\n    for i in self:\n        if i.timespan == DEFAULT_TIMESPAN:\n            continue\n        overlapping_ts_i = self.overlapping_timespans(i.timespan)\n        for j in overlapping_ts_i:\n            if j == i:\n                continue\n            if j.category == i.category:\n                conflicts.append((i, j))\n    if conflicts:\n        WranglerLogger.error(f\"Found conflicting scopes in ScopedPropertySetList:\\n{conflicts}\")\n        raise ScopeConflictError(\"Conflicting scopes in ScopedPropertySetList\")\n\n    return self\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList.check_set_or_change","title":"check_set_or_change(data) classmethod","text":"

Validate that each item has a set or change value.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"before\")\n@classmethod\ndef check_set_or_change(cls, data: list):\n    \"\"\"Validate that each item has a set or change value.\"\"\"\n    data = _grouped_to_indiv_list_of_scopedpropsetitem(data)\n    return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList.overlapping_timespans","title":"overlapping_timespans(timespan)","text":"

Return a list of items that overlap with the given timespan.

Source code in network_wrangler/models/projects/roadway_property_change.py
def overlapping_timespans(self, timespan: TimespanString) -> list[IndivScopedPropertySetItem]:\n    \"\"\"Return a list of items that overlap with the given timespan.\"\"\"\n    timespan_dt = str_to_time_list(timespan)\n    return [i for i in self if dt_overlaps(i.timespan_dt, timespan_dt)]\n
"},{"location":"data_models/#roadway-selections","title":"Roadway Selections","text":"

Pydantic Roadway Selection Models which should align with ProjectCard data models.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectFacility","title":"SelectFacility","text":"

Bases: RecordModel

Roadway Facility Selection.

Source code in network_wrangler/models/projects/roadway_selection.py
class SelectFacility(RecordModel):\n    \"\"\"Roadway Facility Selection.\"\"\"\n\n    require_one_of: ClassVar[OneOf] = [[\"links\", \"nodes\", [\"links\", \"from\", \"to\"]]]\n    model_config = ConfigDict(extra=\"forbid\")\n\n    links: Optional[SelectLinksDict] = None\n    nodes: Optional[SelectNodesDict] = None\n    from_: Annotated[Optional[SelectNodeDict], Field(None, alias=\"from\")]\n    to: Optional[SelectNodeDict] = None\n\n    _examples = [\n        {\n            \"links\": {\"name\": [\"Main Street\"]},\n            \"from\": {\"model_node_id\": 1},\n            \"to\": {\"model_node_id\": 2},\n        },\n        {\"nodes\": {\"osm_node_id\": [\"1\", \"2\", \"3\"]}},\n        {\"nodes\": {\"model_node_id\": [1, 2, 3]}},\n        {\"links\": {\"model_link_id\": [1, 2, 3]}},\n    ]\n\n    @property\n    def feature_types(self) -> str:\n        \"\"\"One of `segment`, `links`, or `nodes`.\"\"\"\n        if self.links and self.from_ and self.to:\n            return \"segment\"\n        if self.links:\n            return \"links\"\n        if self.nodes:\n            return \"nodes\"\n        raise ValueError(\"SelectFacility must have either links or nodes defined.\")\n\n    @property\n    def selection_type(self) -> str:\n        \"\"\"One of `segment`, `links`, or `nodes`.\"\"\"\n        if self.feature_types == \"segment\":\n            return \"segment\"\n        if self.feature_types == \"links\":\n            return self.links.selection_type\n        if self.feature_types == \"nodes\":\n            return self.nodes.selection_type\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectFacility.feature_types","title":"feature_types: str property","text":"

One of segment, links, or nodes.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectFacility.selection_type","title":"selection_type: str property","text":"

One of segment, links, or nodes.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict","title":"SelectLinksDict","text":"

Bases: RecordModel

Requirements for describing links in the facility section of a project card.

Examples:

    {'name': ['Main St'], 'modes': ['drive']}\n    {'osm_link_id': ['123456789']}\n    {'model_link_id': [123456789], 'modes': ['walk']}\n    {'all': 'True', 'modes': ['transit']}\n
Source code in network_wrangler/models/projects/roadway_selection.py
class SelectLinksDict(RecordModel):\n    \"\"\"requirements for describing links in the `facility` section of a project card.\n\n    Examples:\n    ```python\n        {'name': ['Main St'], 'modes': ['drive']}\n        {'osm_link_id': ['123456789']}\n        {'model_link_id': [123456789], 'modes': ['walk']}\n        {'all': 'True', 'modes': ['transit']}\n    ```\n\n    \"\"\"\n\n    require_conflicts: ClassVar[ConflictsWith] = [\n        [\"all\", \"osm_link_id\"],\n        [\"all\", \"model_link_id\"],\n        [\"all\", \"name\"],\n        [\"all\", \"ref\"],\n        [\"osm_link_id\", \"model_link_id\"],\n        [\"osm_link_id\", \"name\"],\n        [\"model_link_id\", \"name\"],\n    ]\n    require_any_of: ClassVar[AnyOf] = [[\"name\", \"ref\", \"osm_link_id\", \"model_link_id\", \"all\"]]\n    _initial_selection_fields: ClassVar[list[str]] = [\n        \"name\",\n        \"ref\",\n        \"osm_link_id\",\n        \"model_link_id\",\n        \"all\",\n    ]\n    _explicit_id_fields: ClassVar[list[str]] = [\"osm_link_id\", \"model_link_id\"]\n    _segment_id_fields: ClassVar[list[str]] = [\n        \"name\",\n        \"ref\",\n        \"osm_link_id\",\n        \"model_link_id\",\n        \"modes\",\n    ]\n    _special_fields: ClassVar[list[str]] = [\"modes\", \"ignore_missing\"]\n    model_config = ConfigDict(extra=\"allow\")\n\n    all: Optional[bool] = False\n    name: Annotated[Optional[list[str]], Field(None, min_length=1)]\n    ref: Annotated[Optional[list[str]], Field(None, min_length=1)]\n    osm_link_id: Annotated[Optional[list[str]], Field(None, min_length=1)]\n    model_link_id: Annotated[Optional[list[int]], Field(None, min_length=1)]\n    modes: list[str] = DEFAULT_SEARCH_MODES\n    ignore_missing: Optional[bool] = True\n\n    _examples = [\n        {\"name\": [\"Main St\"], \"modes\": [\"drive\"]},\n        {\"osm_link_id\": [\"123456789\"]},\n        {\"model_link_id\": [123456789], \"modes\": [\"walk\"]},\n        {\"all\": \"True\", \"modes\": 
[\"transit\"]},\n    ]\n\n    @property\n    def asdict(self) -> dict:\n        \"\"\"Model as a dictionary.\"\"\"\n        return self.model_dump(exclude_none=True, by_alias=True)\n\n    @property\n    def fields(self) -> list[str]:\n        \"\"\"All fields in the selection.\"\"\"\n        return list(self.model_dump(exclude_none=True, by_alias=True).keys())\n\n    @property\n    def initial_selection_fields(self) -> list[str]:\n        \"\"\"Fields used in the initial selection of links.\"\"\"\n        if self.all:\n            return [\"all\"]\n        return [f for f in self._initial_selection_fields if getattr(self, f)]\n\n    @property\n    def explicit_id_fields(self) -> list[str]:\n        \"\"\"Fields that can be used in a slection on their own.\n\n        e.g. `osm_link_id` and `model_link_id`.\n        \"\"\"\n        return [k for k in self._explicit_id_fields if getattr(self, k)]\n\n    @property\n    def segment_id_fields(self) -> list[str]:\n        \"\"\"Fields that can be used in an intial segment selection.\n\n        e.g. 
`name`, `ref`, `osm_link_id`, or `model_link_id`.\n        \"\"\"\n        return [k for k in self._segment_id_fields if getattr(self, k)]\n\n    @property\n    def additional_selection_fields(self):\n        \"\"\"Return a list of fields that are not part of the initial selection fields.\"\"\"\n        _potential = list(\n            set(self.fields) - set(self.initial_selection_fields) - set(self._special_fields)\n        )\n        return [f for f in _potential if getattr(self, f)]\n\n    @property\n    def selection_type(self):\n        \"\"\"One of `all`, `explicit_ids`, or `segment`.\"\"\"\n        if self.all:\n            return \"all\"\n        if self.explicit_id_fields:\n            return \"explicit_ids\"\n        if self.segment_id_fields:\n            return \"segment\"\n        else:\n            raise SelectionFormatError(\n                \"If not a segment, Select Links should have either `all` or an explicit id.\"\n            )\n\n    @property\n    def explicit_id_selection_dict(self):\n        \"\"\"Return a dictionary of fields that are explicit ids.\"\"\"\n        return {k: v for k, v in self.asdict.items() if k in self.explicit_id_fields}\n\n    @property\n    def segment_selection_dict(self):\n        \"\"\"Return a dictionary of fields that are explicit ids.\"\"\"\n        return {k: v for k, v in self.asdict.items() if k in self.segment_id_fields}\n\n    @property\n    def additional_selection_dict(self):\n        \"\"\"Return a dictionary of fields that are not part of the initial selection fields.\"\"\"\n        return {k: v for k, v in self.asdict.items() if k in self.additional_selection_fields}\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.additional_selection_dict","title":"additional_selection_dict property","text":"

Return a dictionary of fields that are not part of the initial selection fields.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.additional_selection_fields","title":"additional_selection_fields property","text":"

Return a list of fields that are not part of the initial selection fields.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.asdict","title":"asdict: dict property","text":"

Model as a dictionary.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.explicit_id_fields","title":"explicit_id_fields: list[str] property","text":"

Fields that can be used in a selection on their own.

e.g. osm_link_id and model_link_id.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.explicit_id_selection_dict","title":"explicit_id_selection_dict property","text":"

Return a dictionary of fields that are explicit ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.fields","title":"fields: list[str] property","text":"

All fields in the selection.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.initial_selection_fields","title":"initial_selection_fields: list[str] property","text":"

Fields used in the initial selection of links.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.segment_id_fields","title":"segment_id_fields: list[str] property","text":"

Fields that can be used in an initial segment selection.

e.g. name, ref, osm_link_id, or model_link_id.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.segment_selection_dict","title":"segment_selection_dict property","text":"

Return a dictionary of fields that are segment ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.selection_type","title":"selection_type property","text":"

One of all, explicit_ids, or segment.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodeDict","title":"SelectNodeDict","text":"

Bases: RecordModel

Selection of a single roadway node in the facility section of a project card.

Source code in network_wrangler/models/projects/roadway_selection.py
class SelectNodeDict(RecordModel):\n    \"\"\"Selection of a single roadway node in the `facility` section of a project card.\"\"\"\n\n    require_one_of: ClassVar[OneOf] = [[\"osm_node_id\", \"model_node_id\"]]\n    initial_selection_fields: ClassVar[list[str]] = [\"osm_node_id\", \"model_node_id\"]\n    explicit_id_fields: ClassVar[list[str]] = [\"osm_node_id\", \"model_node_id\"]\n    model_config = ConfigDict(extra=\"allow\")\n\n    osm_node_id: Optional[str] = None\n    model_node_id: Optional[int] = None\n\n    _examples = [{\"osm_node_id\": \"12345\"}, {\"model_node_id\": 67890}]\n\n    @property\n    def selection_type(self):\n        \"\"\"One of `all` or `explicit_ids`.\"\"\"\n        _explicit_ids = [k for k in self.explicit_id_fields if getattr(self, k)]\n        if _explicit_ids:\n            return \"explicit_ids\"\n        WranglerLogger.debug(\n            f\"SelectNode should have an explicit id: {self.explicit_id_fields} \\\n                Found none in selection dict: \\n{self.model_dump(by_alias=True)}\"\n        )\n        raise SelectionFormatError(\"Select Node should have either `all` or an explicit id.\")\n\n    @property\n    def explicit_id_selection_dict(self) -> dict:\n        \"\"\"Return a dictionary of field that are explicit ids.\"\"\"\n        return {\n            k: [v]\n            for k, v in self.model_dump(exclude_none=True, by_alias=True).items()\n            if k in self.explicit_id_fields\n        }\n\n    @property\n    def additional_selection_fields(self) -> list[str]:\n        \"\"\"Return a list of fields that are not part of the initial selection fields.\"\"\"\n        return list(\n            set(self.model_dump(exclude_none=True, by_alias=True).keys())\n            - set(self.initial_selection_fields)\n        )\n\n    @property\n    def additional_selection_dict(self) -> dict:\n        \"\"\"Return a dictionary of fields that are not part of the initial selection fields.\"\"\"\n        return {\n            k: 
v\n            for k, v in self.model_dump(exclude_none=True, by_alias=True).items()\n            if k in self.additional_selection_fields\n        }\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodeDict.additional_selection_dict","title":"additional_selection_dict: dict property","text":"

Return a dictionary of fields that are not part of the initial selection fields.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodeDict.additional_selection_fields","title":"additional_selection_fields: list[str] property","text":"

Return a list of fields that are not part of the initial selection fields.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodeDict.explicit_id_selection_dict","title":"explicit_id_selection_dict: dict property","text":"

Return a dictionary of fields that are explicit ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodeDict.selection_type","title":"selection_type property","text":"

One of all or explicit_ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict","title":"SelectNodesDict","text":"

Bases: RecordModel

Requirements for describing multiple nodes of a project card (e.g. to delete).

Source code in network_wrangler/models/projects/roadway_selection.py
class SelectNodesDict(RecordModel):\n    \"\"\"Requirements for describing multiple nodes of a project card (e.g. to delete).\"\"\"\n\n    require_any_of: ClassVar[AnyOf] = [[\"osm_node_id\", \"model_node_id\"]]\n    _explicit_id_fields: ClassVar[list[str]] = [\"osm_node_id\", \"model_node_id\"]\n    model_config = ConfigDict(extra=\"forbid\")\n\n    all: Optional[bool] = False\n    osm_node_id: Annotated[Optional[list[str]], Field(None, min_length=1)]\n    model_node_id: Annotated[Optional[list[int]], Field(min_length=1)]\n    ignore_missing: Optional[bool] = True\n\n    _examples = [\n        {\"osm_node_id\": [\"12345\", \"67890\"], \"model_node_id\": [12345, 67890]},\n        {\"osm_node_id\": [\"12345\", \"67890\"]},\n        {\"model_node_id\": [12345, 67890]},\n    ]\n\n    @property\n    def asdict(self) -> dict:\n        \"\"\"Model as a dictionary.\"\"\"\n        return self.model_dump(exclude_none=True, by_alias=True)\n\n    @property\n    def fields(self) -> list[str]:\n        \"\"\"List of fields in the selection.\"\"\"\n        return list(self.model_dump(exclude_none=True, by_alias=True).keys())\n\n    @property\n    def selection_type(self):\n        \"\"\"One of `all` or `explicit_ids`.\"\"\"\n        if self.all:\n            return \"all\"\n        if self.explicit_id_fields:\n            return \"explicit_ids\"\n        WranglerLogger.debug(\n            f\"SelectNodes should have either `all` or an explicit id: {self.explicit_id_fields}. 
\\\n                Found neither in nodes selection: \\n{self.model_dump(by_alias=True)}\"\n        )\n        raise SelectionFormatError(\"Select Node should have either `all` or an explicit id.\")\n\n    @property\n    def explicit_id_fields(self) -> list[str]:\n        \"\"\"Fields which can be used in a selection on their own.\"\"\"\n        return [k for k in self._explicit_id_fields if getattr(self, k)]\n\n    @property\n    def explicit_id_selection_dict(self):\n        \"\"\"Return a dictionary of fields that are explicit ids.\"\"\"\n        return {k: v for k, v in self.asdict.items() if k in self.explicit_id_fields}\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict.asdict","title":"asdict: dict property","text":"

Model as a dictionary.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict.explicit_id_fields","title":"explicit_id_fields: list[str] property","text":"

Fields which can be used in a selection on their own.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict.explicit_id_selection_dict","title":"explicit_id_selection_dict property","text":"

Return a dictionary of fields that are explicit ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict.fields","title":"fields: list[str] property","text":"

List of fields in the selection.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict.selection_type","title":"selection_type property","text":"

One of all or explicit_ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectionFormatError","title":"SelectionFormatError","text":"

Bases: Exception

Raised when there is an issue with the selection format.

Source code in network_wrangler/models/projects/roadway_selection.py
class SelectionFormatError(Exception):\n    \"\"\"Raised when there is an issue with the selection format.\"\"\"\n\n    pass\n
"},{"location":"data_models/#transit-selections","title":"Transit Selections","text":"

Pydantic data models for transit selection properties.

"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.SelectRouteProperties","title":"SelectRouteProperties","text":"

Bases: RecordModel

Selection properties for transit routes.

Source code in network_wrangler/models/projects/transit_selection.py
class SelectRouteProperties(RecordModel):\n    \"\"\"Selection properties for transit routes.\"\"\"\n\n    route_short_name: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    route_long_name: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    agency_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    route_type: Annotated[Optional[List[int]], Field(None, min_length=1)]\n\n    model_config = ConfigDict(\n        extra=\"allow\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n
"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.SelectTransitLinks","title":"SelectTransitLinks","text":"

Bases: RecordModel

Requirements for describing multiple transit links of a project card.

Source code in network_wrangler/models/projects/transit_selection.py
class SelectTransitLinks(RecordModel):\n    \"\"\"Requirements for describing multiple transit links of a project card.\"\"\"\n\n    require_one_of: ClassVar[OneOf] = [\n        [\"ab_nodes\", \"model_link_id\"],\n    ]\n\n    model_link_id: Annotated[Optional[List[int]], Field(min_length=1)] = None\n    ab_nodes: Annotated[Optional[List[TransitABNodesModel]], Field(min_length=1)] = None\n    require: Optional[SelectionRequire] = \"any\"\n\n    model_config = ConfigDict(\n        extra=\"forbid\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n    _examples = [\n        {\n            \"ab_nodes\": [{\"A\": \"75520\", \"B\": \"66380\"}, {\"A\": \"66380\", \"B\": \"75520\"}],\n            \"type\": \"any\",\n        },\n        {\n            \"model_link_id\": [123, 321],\n            \"type\": \"all\",\n        },\n    ]\n
"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.SelectTransitNodes","title":"SelectTransitNodes","text":"

Bases: RecordModel

Requirements for describing multiple transit nodes of a project card (e.g. to delete).

Source code in network_wrangler/models/projects/transit_selection.py
class SelectTransitNodes(RecordModel):\n    \"\"\"Requirements for describing multiple transit nodes of a project card (e.g. to delete).\"\"\"\n\n    require_any_of: ClassVar[AnyOf] = [\n        [\n            # \"stop_id\", TODO Not implemented\n            \"model_node_id\",\n        ]\n    ]\n\n    # stop_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)] TODO Not implemented\n    model_node_id: Annotated[List[int], Field(min_length=1)]\n    require: Optional[SelectionRequire] = \"any\"\n\n    model_config = ConfigDict(\n        extra=\"forbid\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n\n    _examples = [\n        # {\"stop_id\": [\"stop1\", \"stop2\"], \"require\": \"any\"},  TODO Not implemented\n        {\"model_node_id\": [1, 2], \"require\": \"all\"},\n    ]\n
"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.SelectTransitTrips","title":"SelectTransitTrips","text":"

Bases: RecordModel

Selection properties for transit trips.

Source code in network_wrangler/models/projects/transit_selection.py
class SelectTransitTrips(RecordModel):\n    \"\"\"Selection properties for transit trips.\"\"\"\n\n    trip_properties: Optional[SelectTripProperties] = None\n    route_properties: Optional[SelectRouteProperties] = None\n    timespans: Annotated[Optional[List[TimespanString]], Field(None, min_length=1)]\n    nodes: Optional[SelectTransitNodes] = None\n    links: Optional[SelectTransitLinks] = None\n\n    model_config = ConfigDict(\n        extra=\"forbid\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n
"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.SelectTripProperties","title":"SelectTripProperties","text":"

Bases: RecordModel

Selection properties for transit trips.

Source code in network_wrangler/models/projects/transit_selection.py
class SelectTripProperties(RecordModel):\n    \"\"\"Selection properties for transit trips.\"\"\"\n\n    trip_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    shape_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    direction_id: Annotated[Optional[int], Field(None)]\n    service_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    route_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    trip_short_name: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n\n    model_config = ConfigDict(\n        extra=\"allow\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n
"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.TransitABNodesModel","title":"TransitABNodesModel","text":"

Bases: RecordModel

Single transit link model.

Source code in network_wrangler/models/projects/transit_selection.py
class TransitABNodesModel(RecordModel):\n    \"\"\"Single transit link model.\"\"\"\n\n    A: Optional[int] = None  # model_node_id\n    B: Optional[int] = None  # model_node_id\n\n    model_config = ConfigDict(\n        extra=\"forbid\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n
"},{"location":"design/","title":"Design","text":""},{"location":"design/#atomic-parts","title":"Atomic Parts","text":"

NetworkWrangler deals with four primary atomic parts:

1. Scenario objects describe a Roadway Network, Transit Network, and collection of Projects. Scenarios manage the addition and construction of projects on the network via project cards. Scenarios can be based on, or tiered from, other scenarios.

2. RoadwayNetwork objects store information about roadway nodes, directed links between nodes, and the shapes of links (note that the same shape can be shared between two or more links). Network Wrangler reads/writes roadway network objects from/to three files: links.json, shapes.geojson, and nodes.geojson. Their data is stored as GeoDataFrames in the object.

3. TransitNetwork objects contain information about stops, routes, trips, shapes, stoptimes, and frequencies. Network Wrangler reads/writes transit network information from/to GTFS CSV files and stores them as DataFrames within a Partridge feed object. Transit networks can be associated with Roadway networks.

4. ProjectCard objects store information (including metadata) about changes to the network. Network Wrangler reads project cards from .yml files, validates them, and manages them within a Scenario object.
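The relationship among these four atomic parts can be sketched with a toy model. This is NOT the real network_wrangler API — the classes and fields below are illustrative stand-ins only:

```python
# Toy sketch (not the real API) of how the four atomic parts relate:
# a Scenario holds a RoadwayNetwork and a TransitNetwork, and applies
# ProjectCard objects to them.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProjectCard:
    """Stores information (including metadata) about a change to the network."""
    name: str

@dataclass
class RoadwayNetwork:
    """Nodes, directed links between nodes, and link shapes."""
    links: list = field(default_factory=list)

@dataclass
class TransitNetwork:
    """Stops, routes, trips, shapes, stoptimes, and frequencies."""
    routes: list = field(default_factory=list)

@dataclass
class Scenario:
    """A roadway network, a transit network, and a collection of projects."""
    road_net: Optional[RoadwayNetwork] = None
    transit_net: Optional[TransitNetwork] = None
    applied_projects: List[str] = field(default_factory=list)

    def apply(self, card: ProjectCard) -> None:
        # The real implementation routes the change to the roadway and/or
        # transit network; here we only record that it was applied.
        self.applied_projects.append(card.name)

scenario = Scenario(road_net=RoadwayNetwork(), transit_net=TransitNetwork())
scenario.apply(ProjectCard(name="add_managed_lane"))
```

The real `Scenario.apply` dispatches each change to the appropriate network, as shown in the `network_wrangler/scenario.py` source listed later on this page.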

"},{"location":"development/","title":"Development","text":""},{"location":"development/#contributing-to-network-wrangler","title":"Contributing to Network Wrangler","text":""},{"location":"development/#roles","title":"Roles","text":""},{"location":"development/#how-to-contribute","title":"How to Contribute","text":""},{"location":"development/#setup","title":"Setup","text":"
  1. Make sure you have a GitHub account.
  2. Make sure you have git, a terminal (e.g. Mac Terminal, CygWin, etc.), and a text editor installed on your local machine. Optionally, you will likely find it easier to use GitHub Desktop and an IDE (e.g. VS Code, Eclipse, Sublime Text) instead of a simple text editor.
  3. Fork the repository into your own GitHub account and clone it locally.
  4. Install your network_wrangler clone in development mode: pip install -e .
  5. Install documentation requirements: pip install -r requirements.docs.txt
  6. Install development requirements: pip install -r requirements.tests.txt
  7. [Optional] Install act to run github actions locally.
"},{"location":"development/#development-workflow","title":"Development Workflow","text":"
  1. Create an issue for any features/bugs that you are working on.
  2. Create a branch to work on a new issue (or checkout an existing one where the issue is being worked on).
  3. Develop comprehensive tests in the /tests folder.
  4. Modify code, including inline documentation, such that it passes all tests (not just your new ones).
  5. Lint code using pre-commit run --all-files.
  6. Fill out information in the pull request template.
  7. Submit all pull requests to the develop branch.
  8. Core developer will review your pull request and suggest changes.
  9. After requested changes are complete, core developer will sign off on pull-request merge.

Tip: Keep pull requests small and focused. One issue is best.

Tip: Don\u2019t forget to update any associated documentation as well!

"},{"location":"development/#documentation","title":"Documentation","text":"

Documentation is produced by mkdocs:

Documentation is built and deployed using the mike package and Github Actions configured in .github/workflows/ for each \u201cref\u201d (i.e. branch) in the network_wrangler repository.

"},{"location":"development/#testing-and-continuous-integration","title":"Testing and Continuous Integration","text":"

Tests and test data reside in the /tests directory:

Continuous Integration is managed by Github Actions in .github/workflows. All tests other than those with the decorator @pytest.mark.skipci will be run.
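For example, a slow test can be excluded from CI with that decorator. This is a sketch — the test name and body are hypothetical, and the exact pytest invocation CI uses lives in .github/workflows:

```python
import pytest

# Hypothetical long-running test, excluded from CI runs by the skipci marker.
@pytest.mark.skipci
def test_large_network_build():
    # ...build and validate a large network here...
    assert True
```

Locally, `pytest tests/` runs everything; a marker-based run such as `pytest -m "not skipci"` would reproduce the CI selection, assuming the marker is registered in the pytest configuration.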

"},{"location":"development/#project-governance","title":"Project Governance","text":"

The project is currently governed by representatives of its two major organizational contributors:

"},{"location":"development/#code-of-conduct","title":"Code of Conduct","text":"

Contributors to the Network Wrangler Project are expected to read and follow the CODE_OF_CONDUCT for the project.

"},{"location":"development/#contributors","title":"Contributors","text":"
  1. Lisa Zorn - initial Network Wrangler implementation at SFCTA
  2. Billy Charlton
  3. Elizabeth Sall
  4. Sijia Wang
  5. David Ory
  6. Ashish K.

Note: There are likely more contributors - feel free to add your name if we missed it!

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"

Network Wrangler is a Python library for managing travel model network scenarios.

"},{"location":"#system-requirements","title":"System Requirements","text":"

Network Wrangler should be operating system agnostic and has been tested on Ubuntu and Mac OS.

Network Wrangler does require Python 3.7+. If you have a different version of Python installed (e.g. from ArcGIS), conda or a similar virtual environment manager can take care of installing it for you in the installation instructions below.

installing conda

In order to assist in installation, it\u2019s helpful to have miniconda or another virtual environment manager installed to make sure the network wrangler dependencies don\u2019t interfere with any other package requirements you have. If you don\u2019t have any of these already, we recommend starting with Miniconda as it has the smallest footprint. conda is the environment manager that is contained within both the Anaconda and Miniconda applications.

"},{"location":"#installation","title":"Installation","text":"

Requirements for basic network_wrangler functionality, as well as enhanced development/testing, visualization, and documentation functionality, are stored in requirements*.txt and pyproject.toml but are automatically installed when using pip.

create a new conda environment for wrangler

conda config --add channels conda-forge\nconda create python=3.7 -n wrangler\nconda activate wrangler\n

tricky dependencies

rtree, geopandas and osmnx can have some tricky co-dependencies. If you don\u2019t already have an up-to-date installation of them, we\u2019ve had the best success installing them using conda (as opposed to pip).

conda install rtree geopandas osmnx\n

Ready to install network wrangler?

Latest Official VersionFrom GitHubFrom Clone
pip install network-wrangler\n
pip install git+https://github.com/wsp-sag/network_wrangler.git@master#egg=network_wrangler\n

Note

If you want to install from a specific tag/version number or branch, replace @master with @<branchname> or @<tag>.

If you are going to be working on Network Wrangler locally, you might want to clone it to your local machine and install it from the clone. The -e flag will install it in editable mode.

If you have GitHub Desktop installed, you can do this through the GitHub user interface by clicking the green \u201cclone or download\u201d button on the main network wrangler repository page.

Otherwise, you can use the command prompt to navigate to the directory in which you would like to store your network wrangler clone and then use a git command to clone it.

cd path to where you want to put wrangler\ngit clone https://github.com/wsp-sag/network_wrangler\n

Expected output:

cloning into network_wrangler...\nremote: Enumerating objects: 53, done.\nremote: Counting objects: 100% (53/53), done.\nremote: Compressing objects: 100% (34/34), done.\nremote: Total 307 (delta 28), reused 29 (delta 19), pack-reused 254\nReceiving objects: 100% (307/307), 15.94 MiB | 10.49 MiB/s, done.\nResolving deltas: 100% (140/140), done.\n

Then you should be able to install Network Wrangler in \u201cdevelop\u201d mode.

Navigate your command prompt into the network wrangler folder and then install network wrangler in editable mode. This will take a few minutes because it is also installing all the prerequisites.

cd network_wrangler\npip install -e .\n
"},{"location":"#common-installation-issues","title":"Common Installation Issues","text":"

Issue: clang: warning: libstdc++ is deprecated; move to libc++ with a minimum deployment target of OS X 10.9 [-Wdeprecated]. If you are using macOS, you might need to update your Xcode command line tools and headers.

Issue: OSError: Could not find libspatialindex_c library file. Try installing rtree on its own from the Anaconda cloud.

conda install rtree\n

Issue: Shapely, a prerequisite, doesn\u2019t install properly because it is missing the GEOS module. Try installing shapely on its own from the Anaconda cloud.

conda install shapely\n

Issue: Conda is unable to install a library or to update to a specific library version Try installing libraries from conda-forge

conda install -c conda-forge *library*\n

Issue: User does not have permission to install in directories Try running Anaconda Prompt as an administrator.

"},{"location":"#quickstart","title":"Quickstart","text":"

To get a feel for the API and using project cards, please refer to the \u201cWrangler Quickstart\u201d jupyter notebook.

To start the notebook, open a command line in the network_wrangler top-level directory and type:

jupyter notebook

"},{"location":"#documentation","title":"Documentation","text":"

Documentation can be built from the /docs folder using the command: make html

"},{"location":"#usage","title":"Usage","text":"
import network_wrangler\n\n##todo this is just an example for now\n\nnetwork_wrangler.setup_logging()\n\n## Network Manipulation\nmy_network = network_wrangler.read_roadway_network(...) # returns\nmy_network.apply_project_card(...) # returns\nmy_network.write_roadway_network(...) # returns\n\n## Scenario Building\nmy_scenario = network_wrangler.create_scenario(\n        base_scenario=my_base_scenario,\n        card_search_dir=project_card_directory,\n        tags = [\"baseline-2050\"]\n        )\nmy_scenario.apply_all_projects()\nmy_scenario.write(\"my_project/baseline\", \"baseline-2050\")\nmy_scenario.summarize(outfile=\"scenario_summary_baseline.txt\")\n\nmy_scenario.add_projects_from_files(list_of_build_project_card_files)\nmy_scenario.queued_projects\nmy_scenario.apply_all_projects()\nmy_scenario.write(\"my_project/build\", \"baseline\")\n
"},{"location":"#attribution","title":"Attribution","text":"

This project is built upon the ideas and concepts implemented in the network wrangler project by the San Francisco County Transportation Authority and expanded upon by the Metropolitan Transportation Commission.

While Network Wrangler as written here is based on these concepts, the code is distinct and builds upon other packages, such as geopandas and pydantic, which hadn\u2019t been implemented when network wrangler 1.0 was developed.

"},{"location":"#contributing","title":"Contributing","text":"

Pull requests are welcome. Please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

"},{"location":"#license","title":"License","text":"

Apache-2.0

"},{"location":"api/","title":"API Documentation","text":""},{"location":"api/#common-usage","title":"Common Usage","text":""},{"location":"api/#base-objects","title":"Base Objects","text":"

Scenario class and related functions for managing a scenario.

Usage:

my_base_year_scenario = {\n    \"road_net\": load_roadway(\n        links_file=STPAUL_LINK_FILE,\n        nodes_file=STPAUL_NODE_FILE,\n        shapes_file=STPAUL_SHAPE_FILE,\n    ),\n    \"transit_net\": load_transit(STPAUL_DIR),\n}\n\n# create a future baseline scenario from base by searching for all cards in dir w/ baseline tag\nproject_card_directory = os.path.join(STPAUL_DIR, \"project_cards\")\nmy_scenario = create_scenario(\n    base_scenario=my_base_year_scenario,\n    card_search_dir=project_card_directory,\n    filter_tags = [ \"baseline2050\" ]\n)\n\n# check project card queue and then apply the projects\nmy_scenario.queued_projects\nmy_scenario.apply_all_projects()\n\n# check applied projects, write it out, and create a summary report.\nmy_scenario.applied_projects\nmy_scenario.write(\"baseline\")\nmy_scenario.summarize(outfile = \"baseline2050summary.txt\")\n\n# Add some projects to create a build scenario based on a list of files.\nbuild_card_filenames = [\n    \"3_multiple_roadway_attribute_change.yml\",\n    \"road.prop_changes.segment.yml\",\n    \"4_simple_managed_lane.yml\",\n]\nmy_scenario.add_projects_from_files(build_card_filenames)\nmy_scenario.write(\"build2050\")\nmy_scenario.summarize(outfile = \"build2050summary.txt\")\n

Roadway Network class and functions for Network Wrangler.

Used to represent a roadway network and perform operations on it.

Usage:

from network_wrangler import load_roadway_from_dir, write_roadway\n\nnet = load_roadway_from_dir(\"my_dir\")\nnet.get_selection({\"links\": [{\"name\": [\"I 35E\"]}]})\nnet.apply(\"my_project_card.yml\")\n\nwrite_roadway(net, \"my_out_prefix\", \"my_dir\", file_format = \"parquet\")\n

TransitNetwork class for representing a transit network.

Transit Networks are represented as a Wrangler-flavored GTFS Feed and optionally mapped to a RoadwayNetwork object. The TransitNetwork object is the primary object for managing transit networks in Wrangler.

Usage:

```python\nimport network_wrangler as wr\nt = wr.load_transit(stpaul_gtfs)\nt.road_net = wr.load_roadway(stpaul_roadway)\nt = t.apply(project_card)\nwrite_transit(t, \"output_dir\")\n```\n
"},{"location":"api/#network_wrangler.scenario.ProjectCardError","title":"ProjectCardError","text":"

Bases: Exception

Raised when a project card is not valid.

Source code in network_wrangler/scenario.py
class ProjectCardError(Exception):\n    \"\"\"Raised when a project card is not valid.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.scenario.Scenario","title":"Scenario","text":"

Bases: object

Holds information about a scenario.

Typical usage example:

my_base_year_scenario = {\n    \"road_net\": load_roadway(\n        links_file=STPAUL_LINK_FILE,\n        nodes_file=STPAUL_NODE_FILE,\n        shapes_file=STPAUL_SHAPE_FILE,\n    ),\n    \"transit_net\": load_transit(STPAUL_DIR),\n}\n\n# create a future baseline scenario from base by searching for all cards in dir w/ baseline tag\nproject_card_directory = os.path.join(STPAUL_DIR, \"project_cards\")\nmy_scenario = create_scenario(\n    base_scenario=my_base_year_scenario,\n    card_search_dir=project_card_directory,\n    filter_tags = [ \"baseline2050\" ]\n)\n\n# check project card queue and then apply the projects\nmy_scenario.queued_projects\nmy_scenario.apply_all_projects()\n\n# check applied projects, write it out, and create a summary report.\nmy_scenario.applied_projects\nmy_scenario.write(\"baseline\")\nmy_scenario.summarize(outfile = \"baseline2050summary.txt\")\n\n# Add some projects to create a build scenario based on a list of files.\nbuild_card_filenames = [\n    \"3_multiple_roadway_attribute_change.yml\",\n    \"road.prop_changes.segment.yml\",\n    \"4_simple_managed_lane.yml\",\n]\nmy_scenario.add_projects_from_files(build_card_filenames)\nmy_scenario.write(\"build2050\")\nmy_scenario.summarize(outfile = \"build2050summary.txt\")\n

Attributes:

Name Type Description base_scenario

dictionary representation of a scenario

road_net Optional[RoadwayNetwork]

instance of RoadwayNetwork for the scenario

transit_net Optional[TransitNetwork]

instance of TransitNetwork for the scenario

project_cards dict[str, ProjectCard]

Mapping[ProjectCard.name,ProjectCard] Storage of all project cards by name.

queued_projects

Projects which are \u201cshovel ready\u201d: prerequisites have been checked and any required re-ordering done. Similar to git staging, project cards aren\u2019t recognized in this collection once they are moved to applied.

applied_projects

list of project names that have been applied

projects

list of all projects either planned, queued, or applied

prerequisites

dictionary storing prerequisite information

corequisites

dictionary storing corequisite information

conflicts

dictionary storing conflict information

Source code in network_wrangler/scenario.py
class Scenario(object):\n    \"\"\"Holds information about a scenario.\n\n    Typical usage example:\n\n    ```python\n    my_base_year_scenario = {\n        \"road_net\": load_roadway(\n            links_file=STPAUL_LINK_FILE,\n            nodes_file=STPAUL_NODE_FILE,\n            shapes_file=STPAUL_SHAPE_FILE,\n        ),\n        \"transit_net\": load_transit(STPAUL_DIR),\n    }\n\n    # create a future baseline scenario from base by searching for all cards in dir w/ baseline tag\n    project_card_directory = os.path.join(STPAUL_DIR, \"project_cards\")\n    my_scenario = create_scenario(\n        base_scenario=my_base_year_scenario,\n        card_search_dir=project_card_directory,\n        filter_tags = [ \"baseline2050\" ]\n    )\n\n    # check project card queue and then apply the projects\n    my_scenario.queued_projects\n    my_scenario.apply_all_projects()\n\n    # check applied projects, write it out, and create a summary report.\n    my_scenario.applied_projects\n    my_scenario.write(\"baseline\")\n    my_scenario.summarize(outfile = \"baseline2050summary.txt\")\n\n    # Add some projects to create a build scenario based on a list of files.\n    build_card_filenames = [\n        \"3_multiple_roadway_attribute_change.yml\",\n        \"road.prop_changes.segment.yml\",\n        \"4_simple_managed_lane.yml\",\n    ]\n    my_scenario.add_projects_from_files(build_card_filenames)\n    my_scenario.write(\"build2050\")\n    my_scenario.summarize(outfile = \"build2050summary.txt\")\n    ```\n\n    Attributes:\n        base_scenario: dictionary representation of a scenario\n        road_net: instance of RoadwayNetwork for the scenario\n        transit_net: instance of TransitNetwork for the scenario\n        project_cards: Mapping[ProjectCard.name,ProjectCard] Storage of all project cards by name.\n        queued_projects: Projects which are \"shovel ready\" - have had pre-requisits checked and\n            done any required re-ordering. 
Similar to a git staging, project cards aren't\n            recognized in this collecton once they are moved to applied.\n        applied_projects: list of project names that have been applied\n        projects: list of all projects either planned, queued, or applied\n        prerequisites:  dictionary storing prerequiste information\n        corequisites:  dictionary storing corequisite information\n        conflicts: dictionary storing conflict information\n    \"\"\"\n\n    def __init__(\n        self,\n        base_scenario: Union[Scenario, dict],\n        project_card_list: list[ProjectCard] = [],\n        name=\"\",\n    ):\n        \"\"\"Constructor.\n\n        Args:\n        base_scenario: A base scenario object to base this isntance off of, or a dict which\n            describes the scenario attributes including applied projects and respective conflicts.\n            `{\"applied_projects\": [],\"conflicts\":{...}}`\n        project_card_list: Optional list of ProjectCard instances to add to planned projects.\n        name: Optional name for the scenario.\n        \"\"\"\n        WranglerLogger.info(\"Creating Scenario\")\n\n        if isinstance(base_scenario, Scenario):\n            base_scenario = base_scenario.__dict__\n\n        if not set(BASE_SCENARIO_SUGGESTED_PROPS) <= set(base_scenario.keys()):\n            WranglerLogger.warning(\n                f\"Base_scenario doesn't contain {BASE_SCENARIO_SUGGESTED_PROPS}\"\n            )\n\n        self.base_scenario = base_scenario\n        self.name = name\n        # if the base scenario had roadway or transit networks, use them as the basis.\n        self.road_net: Optional[RoadwayNetwork] = copy.deepcopy(self.base_scenario.get(\"road_net\"))\n        self.transit_net: Optional[TransitNetwork] = copy.deepcopy(\n            self.base_scenario.get(\"transit_net\")\n        )\n\n        self.project_cards: dict[str, ProjectCard] = {}\n        self._planned_projects: list[str] = []\n        
self._queued_projects = None\n        self.applied_projects = self.base_scenario.get(\"applied_projects\", [])\n\n        self.prerequisites = self.base_scenario.get(\"prerequisites\", {})\n        self.corequisites = self.base_scenario.get(\"corequisites\", {})\n        self.conflicts = self.base_scenario.get(\"conflicts\", {})\n\n        for p in project_card_list:\n            self._add_project(p)\n\n    @property\n    def projects(self):\n        \"\"\"Returns a list of all projects in the scenario: applied and planned.\"\"\"\n        return self.applied_projects + self._planned_projects\n\n    @property\n    def queued_projects(self):\n        \"\"\"Returns a list version of _queued_projects queue.\n\n        Queued projects are thos that have been planned, have all pre-requisites satisfied, and\n        have been ordered based on pre-requisites.\n\n        If no queued projects, will dynamically generate from planned projects based on\n        pre-requisites and return the queue.\n        \"\"\"\n        if not self._queued_projects:\n            self._check_projects_requirements_satisfied(self._planned_projects)\n            self._queued_projects = self.order_projects(self._planned_projects)\n        return list(self._queued_projects)\n\n    def __str__(self):\n        \"\"\"String representation of the Scenario object.\"\"\"\n        s = [\"{}: {}\".format(key, value) for key, value in self.__dict__.items()]\n        return \"\\n\".join(s)\n\n    def _add_dependencies(self, project_name, dependencies: dict) -> None:\n        \"\"\"Add dependencies from a project card to relevant scenario variables.\n\n        Updates existing \"prerequisites\", \"corequisites\" and \"conflicts\".\n        Lowercases everything to enable string matching.\n\n        Args:\n            project_name: name of project you are adding dependencies for.\n            dependencies: Dictionary of depndencies by dependency type and list of associated\n                projects.\n        
\"\"\"\n        project_name = project_name.lower()\n\n        for d, v in dependencies.items():\n            _dep = list(map(str.lower, v))\n            WranglerLogger.debug(f\"Adding {_dep} to {project_name} dependency table.\")\n            self.__dict__[d].update({project_name: _dep})\n\n    def _add_project(\n        self,\n        project_card: ProjectCard,\n        validate: bool = True,\n        filter_tags: Collection[str] = [],\n    ) -> None:\n        \"\"\"Adds a single ProjectCard instances to the Scenario.\n\n        Checks that a project of same name is not already in scenario.\n        If selected, will validate ProjectCard before adding.\n        If provided, will only add ProjectCard if it matches at least one filter_tags.\n\n        Resets scenario queued_projects.\n\n        Args:\n            project_card (ProjectCard): ProjectCard instance to add to scenario.\n            validate (bool, optional): If True, will validate the projectcard before\n                being adding it to the scenario. Defaults to True.\n            filter_tags (Collection[str], optional): If used, will only add the project card if\n                its tags match one or more of these filter_tags. 
Defaults to []\n                which means no tag-filtering will occur.\n\n        \"\"\"\n        project_name = project_card.project.lower()\n        filter_tags = list(map(str.lower, filter_tags))\n\n        if project_name in self.projects:\n            raise ProjectCardError(\n                f\"Names not unique from existing scenario projects: {project_card.project}\"\n            )\n\n        if filter_tags and set(project_card.tags).isdisjoint(set(filter_tags)):\n            WranglerLogger.debug(\n                f\"Skipping {project_name} - no overlapping tags with {filter_tags}.\"\n            )\n            return\n\n        if validate:\n            assert project_card.valid\n\n        WranglerLogger.info(f\"Adding {project_name} to scenario.\")\n        self.project_cards[project_name] = project_card\n        self._planned_projects.append(project_name)\n        self._queued_projects = None\n        self._add_dependencies(project_name, project_card.dependencies)\n\n    def add_project_cards(\n        self,\n        project_card_list: Collection[ProjectCard],\n        validate: bool = True,\n        filter_tags: Collection[str] = [],\n    ) -> None:\n        \"\"\"Adds a list of ProjectCard instances to the Scenario.\n\n        Checks that a project of same name is not already in scenario.\n        If selected, will validate ProjectCard before adding.\n        If provided, will only add ProjectCard if it matches at least one filter_tags.\n\n        Args:\n            project_card_list (Collection[ProjectCard]): List of ProjectCard instances to add to\n                scenario.\n            validate (bool, optional): If True, will require each ProjectCard is validated before\n                being added to scenario. 
Defaults to True.\n            filter_tags (Collection[str], optional): If used, will filter ProjectCard instances\n                and only add those whose tags match one or more of these filter_tags.\n                Defaults to [] - which means no tag-filtering will occur.\n        \"\"\"\n        for p in project_card_list:\n            self._add_project(p, validate=validate, filter_tags=filter_tags)\n\n    def _check_projects_requirements_satisfied(self, project_list: Collection[str]):\n        \"\"\"Checks all requirements are satisified to apply this specific set of projects.\n\n        Including:\n        1. has an associaed project card\n        2. is in scenario's planned projects\n        3. pre-requisites satisfied\n        4. co-requisies satisfied by applied or co-applied projects\n        5. no conflicing applied or co-applied projects\n\n        Args:\n            project_list: list of projects to check requirements for.\n        \"\"\"\n        self._check_projects_planned(project_list)\n        self._check_projects_have_project_cards(project_list)\n        self._check_projects_prerequisites(project_list)\n        self._check_projects_corequisites(project_list)\n        self._check_projects_conflicts(project_list)\n\n    def _check_projects_planned(self, project_names: Collection[str]) -> None:\n        \"\"\"Checks that a list of projects are in the scenario's planned projects.\"\"\"\n        _missing_ps = [p for p in self._planned_projects if p not in self._planned_projects]\n        if _missing_ps:\n            raise ValueError(\n                f\"Projects are not in planned projects: \\n {_missing_ps}. 
Add them by \\\n                using add_project_cards(), add_projects_from_files(), or \\\n                add_projects_from_directory().\"\n            )\n\n    def _check_projects_have_project_cards(self, project_list: Collection[str]) -> bool:\n        \"\"\"Checks that a list of projects has an associated project card in the scenario.\"\"\"\n        _missing = [p for p in project_list if p not in self.project_cards]\n        if _missing:\n            WranglerLogger.error(\n                f\"Projects referenced which are missing project cards: {_missing}\"\n            )\n            return False\n        return True\n\n    def _check_projects_prerequisites(self, project_names: list[str]) -> None:\n        \"\"\"Check a list of projects' pre-requisites have been or will be applied to scenario.\"\"\"\n        if set(project_names).isdisjoint(set(self.prerequisites.keys())):\n            return\n        _prereqs = []\n        for p in project_names:\n            _prereqs += self.prerequisites.get(p, [])\n        _projects_applied = self.applied_projects + project_names\n        _missing = list(set(_prereqs) - set(_projects_applied))\n        if _missing:\n            WranglerLogger.debug(\n                f\"project_names: {project_names}\\nprojects_have_or_will_be_applied: \\\n                    {_projects_applied}\\nmissing: {_missing}\"\n            )\n            raise ScenarioPrerequisiteError(f\"Missing {len(_missing)} pre-requisites: {_missing}\")\n\n    def _check_projects_corequisites(self, project_names: list[str]) -> None:\n        \"\"\"Check a list of projects' co-requisites have been or will be applied to scenario.\"\"\"\n        if set(project_names).isdisjoint(set(self.corequisites.keys())):\n            return\n        _coreqs = []\n        for p in project_names:\n            _coreqs += self.corequisites.get(p, [])\n        _projects_applied = self.applied_projects + project_names\n        _missing = list(set(_coreqs) - 
set(_projects_applied))\n        if _missing:\n            WranglerLogger.debug(\n                f\"project_names: {project_names}\\nprojects_have_or_will_be_applied: \\\n                    {_projects_applied}\\nmissing: {_missing}\"\n            )\n            raise ScenarioCorequisiteError(f\"Missing {len(_missing)} corequisites: {_missing}\")\n\n    def _check_projects_conflicts(self, project_names: list[str]) -> None:\n        \"\"\"Checks that list of projects' conflicts have not been or will be applied to scenario.\"\"\"\n        # WranglerLogger.debug(\"Checking Conflicts...\")\n        projects_to_check = project_names + self.applied_projects\n        # WranglerLogger.debug(f\"\\nprojects_to_check:{projects_to_check}\\nprojects_with_conflicts:{set(self.conflicts.keys())}\")\n        if set(projects_to_check).isdisjoint(set(self.conflicts.keys())):\n            # WranglerLogger.debug(\"Projects have no conflicts to check\")\n            return\n        _conflicts = []\n        for p in project_names:\n            _conflicts += self.conflicts.get(p, [])\n        _conflict_problems = [p for p in _conflicts if p in projects_to_check]\n        if _conflict_problems:\n            WranglerLogger.warning(f\"Conflict Problems: \\n{_conflict_problems}\")\n            _conf_dict = {\n                k: v\n                for k, v in self.conflicts.items()\n                if k in projects_to_check and not set(v).isdisjoint(set(_conflict_problems))\n            }\n            WranglerLogger.debug(f\"Problematic Conflicts: \\n{_conf_dict}\")\n            raise ScenarioConflictError(f\"Found {len(_conflicts)} conflicts: {_conflict_problems}\")\n\n    def order_projects(self, project_list: Collection[str]) -> deque:\n        \"\"\"Orders a list of projects based on moving up pre-requisites into a deque.\n\n        Args:\n            project_list: list of projects to order\n\n        Returns: deque for applying projects.\n        \"\"\"\n        project_list = [p.lower() 
for p in project_list]\n        assert self._check_projects_have_project_cards(project_list)\n\n        # build prereq (adjacency) list for topological sort\n        adjacency_list = defaultdict(list)\n        visited_list = defaultdict(bool)\n\n        for project in project_list:\n            visited_list[project] = False\n            if not self.prerequisites.get(project):\n                continue\n            for prereq in self.prerequisites[project]:\n                # this will always be true, else would have been flagged in missing \\\n                # prerequisite check, but just in case\n                if prereq.lower() in project_list:\n                    adjacency_list[prereq.lower()].append(project)\n\n        # sorted_project_names is topologically sorted project card names (based on prerequisites)\n        _ordered_projects = topological_sort(\n            adjacency_list=adjacency_list, visited_list=visited_list\n        )\n\n        if not set(_ordered_projects) == set(project_list):\n            _missing = list(set(project_list) - set(_ordered_projects))\n            raise ValueError(f\"Project sort resulted in missing projects: {_missing}\")\n\n        project_deque = deque(_ordered_projects)\n\n        WranglerLogger.debug(f\"Ordered Projects: \\n{project_deque}\")\n\n        return project_deque\n\n    def apply_all_projects(self):\n        \"\"\"Applies all planned projects in the queue.\"\"\"\n        # Call this to make sure projects are appropriately queued in hidden variable.\n        self.queued_projects\n\n        # Use hidden variable.\n        while self._queued_projects:\n            self._apply_project(self._queued_projects.popleft())\n\n        # set this so it will trigger re-queuing any more projects.\n        self._queued_projects = None\n\n    def _apply_change(self, change: Union[ProjectCard, SubProject]) -> None:\n        \"\"\"Applies a specific change specified in a project card.\n\n        Change type must be in at least one 
of:\n        - ROADWAY_CARD_TYPES\n        - TRANSIT_CARD_TYPES\n\n        Args:\n            change: a project card or subproject card\n        \"\"\"\n        if change.change_type in ROADWAY_CARD_TYPES:\n            if not self.road_net:\n                raise ValueError(\"Missing Roadway Network\")\n            self.road_net.apply(change)\n        if change.change_type in TRANSIT_CARD_TYPES:\n            if not self.transit_net:\n                raise ValueError(\"Missing Transit Network\")\n            self.transit_net.apply(change)\n        if change.change_type in SECONDARY_TRANSIT_CARD_TYPES and self.transit_net:\n            self.transit_net.apply(change)\n\n        if change.change_type not in TRANSIT_CARD_TYPES + ROADWAY_CARD_TYPES:\n            raise ProjectCardError(\n                f\"Project {change.project}: Don't understand project category: {change.change_type}\"\n            )\n\n    def _apply_project(self, project_name: str) -> None:\n        \"\"\"Applies project card to scenario.\n\n        If a list of changes is specified in referenced project card, iterates through each change.\n\n        Args:\n            project_name (str): name of project to be applied.\n        \"\"\"\n        project_name = project_name.lower()\n\n        WranglerLogger.info(f\"Applying {project_name}\")\n\n        p = self.project_cards[project_name]\n        WranglerLogger.debug(f\"types: {p.change_types}\")\n        WranglerLogger.debug(f\"type: {p.change_type}\")\n        if p.sub_projects:\n            for sp in p.sub_projects:\n                WranglerLogger.debug(f\"- applying subproject: {sp.change_type}\")\n                self._apply_change(sp)\n\n        else:\n            self._apply_change(p)\n\n        self._planned_projects.remove(project_name)\n        self.applied_projects.append(project_name)\n\n    def apply_projects(self, project_list: Collection[str]):\n        \"\"\"Applies a specific list of projects from the planned project queue.\n\n        Will 
order the list of projects based on pre-requisites.\n\n        NOTE: does not check co-requisites b/c that isn't possible when applying a single project.\n\n        Args:\n            project_list: List of projects to be applied. All need to be in the planned project\n                queue.\n        \"\"\"\n        project_list = [p.lower() for p in project_list]\n\n        self._check_projects_requirements_satisfied(project_list)\n        ordered_project_queue = self.order_projects(project_list)\n\n        while ordered_project_queue:\n            self._apply_project(ordered_project_queue.popleft())\n\n        # Set so that when called again it will retrigger queueing from planned projects.\n        self._queued_projects = None\n\n    def write(self, path: Union[Path, str], name: str) -> None:\n        \"\"\"Writes scenario networks and summary to disk.\n\n        Args:\n            path: Path to write scenario networks and scenario summary to.\n            name: Name to use.\n        \"\"\"\n        if self.road_net:\n            write_roadway(self.road_net, prefix=name, out_dir=path)\n        if self.transit_net:\n            write_transit(self.transit_net, prefix=name, out_dir=path)\n        self.summarize(outfile=os.path.join(path, name))\n\n    def summarize(self, project_detail: bool = True, outfile: str = \"\", mode: str = \"a\") -> str:\n        \"\"\"A high level summary of the created scenario.\n\n        Args:\n            project_detail: If True (default), will write out project card summaries.\n            outfile: If specified, will write scenario summary to text file.\n            mode: Outfile open mode. 'a' to append, 'w' to overwrite.\n\n        Returns:\n            string of summary\n\n        \"\"\"\n        return scenario_summary(self, project_detail, outfile, mode)\n
"},{"location":"api/#network_wrangler.scenario.Scenario.projects","title":"projects property","text":"

Returns a list of all projects in the scenario: applied and planned.

"},{"location":"api/#network_wrangler.scenario.Scenario.queued_projects","title":"queued_projects property","text":"

Returns a list version of _queued_projects queue.

Queued projects are those that have been planned, have all pre-requisites satisfied, and have been ordered based on pre-requisites.

If no queued projects, will dynamically generate from planned projects based on pre-requisites and return the queue.
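The prerequisite-based queueing described above can be sketched with the standard library's `graphlib` (the project names and the `queue_projects` helper here are hypothetical stand-ins; network_wrangler uses its own `topological_sort` utility):

```python
from collections import deque
from graphlib import TopologicalSorter

def queue_projects(planned: list[str], prerequisites: dict[str, list[str]]) -> deque:
    """Order planned projects so every prerequisite precedes its dependents."""
    ts = TopologicalSorter()
    for project in planned:
        # graphlib takes (node, *predecessors); only prereqs in the planned set matter
        ts.add(project, *[p for p in prerequisites.get(project, []) if p in planned])
    return deque(ts.static_order())

# "add_brt" declares "widen_i94" as a prerequisite, so it sorts after it
queue = queue_projects(
    ["add_brt", "widen_i94", "new_interchange"],
    {"add_brt": ["widen_i94"]},
)
```

Popping from the left of the resulting deque then applies projects in a prerequisite-safe order.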

"},{"location":"api/#network_wrangler.scenario.Scenario.__init__","title":"__init__(base_scenario, project_card_list=[], name='')","text":"

Constructor.

Source code in network_wrangler/scenario.py
def __init__(\n    self,\n    base_scenario: Union[Scenario, dict],\n    project_card_list: list[ProjectCard] = [],\n    name=\"\",\n):\n    \"\"\"Constructor.\n\n    Args:\n    base_scenario: A base scenario object to base this instance off of, or a dict which\n        describes the scenario attributes including applied projects and respective conflicts.\n        `{\"applied_projects\": [],\"conflicts\":{...}}`\n    project_card_list: Optional list of ProjectCard instances to add to planned projects.\n    name: Optional name for the scenario.\n    \"\"\"\n    WranglerLogger.info(\"Creating Scenario\")\n\n    if isinstance(base_scenario, Scenario):\n        base_scenario = base_scenario.__dict__\n\n    if not set(BASE_SCENARIO_SUGGESTED_PROPS) <= set(base_scenario.keys()):\n        WranglerLogger.warning(\n            f\"Base_scenario doesn't contain {BASE_SCENARIO_SUGGESTED_PROPS}\"\n        )\n\n    self.base_scenario = base_scenario\n    self.name = name\n    # if the base scenario had roadway or transit networks, use them as the basis.\n    self.road_net: Optional[RoadwayNetwork] = copy.deepcopy(self.base_scenario.get(\"road_net\"))\n    self.transit_net: Optional[TransitNetwork] = copy.deepcopy(\n        self.base_scenario.get(\"transit_net\")\n    )\n\n    self.project_cards: dict[str, ProjectCard] = {}\n    self._planned_projects: list[str] = []\n    self._queued_projects = None\n    self.applied_projects = self.base_scenario.get(\"applied_projects\", [])\n\n    self.prerequisites = self.base_scenario.get(\"prerequisites\", {})\n    self.corequisites = self.base_scenario.get(\"corequisites\", {})\n    self.conflicts = self.base_scenario.get(\"conflicts\", {})\n\n    for p in project_card_list:\n        self._add_project(p)\n
"},{"location":"api/#network_wrangler.scenario.Scenario.__str__","title":"__str__()","text":"

String representation of the Scenario object.

Source code in network_wrangler/scenario.py
def __str__(self):\n    \"\"\"String representation of the Scenario object.\"\"\"\n    s = [\"{}: {}\".format(key, value) for key, value in self.__dict__.items()]\n    return \"\\n\".join(s)\n
"},{"location":"api/#network_wrangler.scenario.Scenario.add_project_cards","title":"add_project_cards(project_card_list, validate=True, filter_tags=[])","text":"

Adds a list of ProjectCard instances to the Scenario.

Checks that a project of same name is not already in scenario. If selected, will validate ProjectCard before adding. If provided, will only add ProjectCard if it matches at least one filter_tags.

Parameters:

Name Type Description Default project_card_list Collection[ProjectCard]

List of ProjectCard instances to add to scenario.

required validate bool

If True, will require each ProjectCard is validated before being added to scenario. Defaults to True.

True filter_tags Collection[str]

If used, will filter ProjectCard instances and only add those whose tags match one or more of these filter_tags. Defaults to [] - which means no tag-filtering will occur.

[] Source code in network_wrangler/scenario.py
def add_project_cards(\n    self,\n    project_card_list: Collection[ProjectCard],\n    validate: bool = True,\n    filter_tags: Collection[str] = [],\n) -> None:\n    \"\"\"Adds a list of ProjectCard instances to the Scenario.\n\n    Checks that a project of same name is not already in scenario.\n    If selected, will validate ProjectCard before adding.\n    If provided, will only add ProjectCard if it matches at least one filter_tags.\n\n    Args:\n        project_card_list (Collection[ProjectCard]): List of ProjectCard instances to add to\n            scenario.\n        validate (bool, optional): If True, will require each ProjectCard is validated before\n            being added to scenario. Defaults to True.\n        filter_tags (Collection[str], optional): If used, will filter ProjectCard instances\n            and only add those whose tags match one or more of these filter_tags.\n            Defaults to [] - which means no tag-filtering will occur.\n    \"\"\"\n    for p in project_card_list:\n        self._add_project(p, validate=validate, filter_tags=filter_tags)\n
"},{"location":"api/#network_wrangler.scenario.Scenario.apply_all_projects","title":"apply_all_projects()","text":"

Applies all planned projects in the queue.

Source code in network_wrangler/scenario.py
def apply_all_projects(self):\n    \"\"\"Applies all planned projects in the queue.\"\"\"\n    # Call this to make sure projects are appropriately queued in hidden variable.\n    self.queued_projects\n\n    # Use hidden variable.\n    while self._queued_projects:\n        self._apply_project(self._queued_projects.popleft())\n\n    # set this so it will trigger re-queuing any more projects.\n    self._queued_projects = None\n
"},{"location":"api/#network_wrangler.scenario.Scenario.apply_projects","title":"apply_projects(project_list)","text":"

Applies a specific list of projects from the planned project queue.

Will order the list of projects based on pre-requisites.

NOTE: does not check co-requisites b/c that isn\u2019t possible when applying a single project.

Parameters:

Name Type Description Default project_list Collection[str]

List of projects to be applied. All need to be in the planned project queue.

required Source code in network_wrangler/scenario.py
def apply_projects(self, project_list: Collection[str]):\n    \"\"\"Applies a specific list of projects from the planned project queue.\n\n    Will order the list of projects based on pre-requisites.\n\n    NOTE: does not check co-requisites b/c that isn't possible when applying a single project.\n\n    Args:\n        project_list: List of projects to be applied. All need to be in the planned project\n            queue.\n    \"\"\"\n    project_list = [p.lower() for p in project_list]\n\n    self._check_projects_requirements_satisfied(project_list)\n    ordered_project_queue = self.order_projects(project_list)\n\n    while ordered_project_queue:\n        self._apply_project(ordered_project_queue.popleft())\n\n    # Set so that when called again it will retrigger queueing from planned projects.\n    self._queued_projects = None\n
"},{"location":"api/#network_wrangler.scenario.Scenario.order_projects","title":"order_projects(project_list)","text":"

Orders a list of projects based on moving up pre-requisites into a deque.

Parameters:

Name Type Description Default project_list Collection[str]

list of projects to order

required Source code in network_wrangler/scenario.py
def order_projects(self, project_list: Collection[str]) -> deque:\n    \"\"\"Orders a list of projects based on moving up pre-requisites into a deque.\n\n    Args:\n        project_list: list of projects to order\n\n    Returns: deque for applying projects.\n    \"\"\"\n    project_list = [p.lower() for p in project_list]\n    assert self._check_projects_have_project_cards(project_list)\n\n    # build prereq (adjacency) list for topological sort\n    adjacency_list = defaultdict(list)\n    visited_list = defaultdict(bool)\n\n    for project in project_list:\n        visited_list[project] = False\n        if not self.prerequisites.get(project):\n            continue\n        for prereq in self.prerequisites[project]:\n            # this will always be true, else would have been flagged in missing \\\n            # prerequisite check, but just in case\n            if prereq.lower() in project_list:\n                adjacency_list[prereq.lower()].append(project)\n\n    # sorted_project_names is topologically sorted project card names (based on prerequisites)\n    _ordered_projects = topological_sort(\n        adjacency_list=adjacency_list, visited_list=visited_list\n    )\n\n    if not set(_ordered_projects) == set(project_list):\n        _missing = list(set(project_list) - set(_ordered_projects))\n        raise ValueError(f\"Project sort resulted in missing projects: {_missing}\")\n\n    project_deque = deque(_ordered_projects)\n\n    WranglerLogger.debug(f\"Ordered Projects: \\n{project_deque}\")\n\n    return project_deque\n
"},{"location":"api/#network_wrangler.scenario.Scenario.summarize","title":"summarize(project_detail=True, outfile='', mode='a')","text":"

A high level summary of the created scenario.

Parameters:

Name Type Description Default project_detail bool

If True (default), will write out project card summaries.

True outfile str

If specified, will write scenario summary to text file.

'' mode str

Outfile open mode. \u2018a\u2019 to append, \u2018w\u2019 to overwrite.

'a'

Returns:

Type Description str

string of summary

Source code in network_wrangler/scenario.py
def summarize(self, project_detail: bool = True, outfile: str = \"\", mode: str = \"a\") -> str:\n    \"\"\"A high level summary of the created scenario.\n\n    Args:\n        project_detail: If True (default), will write out project card summaries.\n        outfile: If specified, will write scenario summary to text file.\n        mode: Outfile open mode. 'a' to append 'w' to overwrite.\n\n    Returns:\n        string of summary\n\n    \"\"\"\n    return scenario_summary(self, project_detail, outfile, mode)\n
"},{"location":"api/#network_wrangler.scenario.Scenario.write","title":"write(path, name)","text":"

Writes scenario networks and summary to disk.

Parameters:

Name Type Description Default path Union[Path, str]

Path to write scenario networks and scenario summary to.

required name str

Name to use.

required Source code in network_wrangler/scenario.py
def write(self, path: Union[Path, str], name: str) -> None:\n    \"\"\"Writes scenario networks and summary to disk.\n\n    Args:\n        path: Path to write scenario networks and scenario summary to.\n        name: Name to use.\n    \"\"\"\n    if self.road_net:\n        write_roadway(self.road_net, prefix=name, out_dir=path)\n    if self.transit_net:\n        write_transit(self.transit_net, prefix=name, out_dir=path)\n    self.summarize(outfile=os.path.join(path, name))\n
"},{"location":"api/#network_wrangler.scenario.ScenarioConflictError","title":"ScenarioConflictError","text":"

Bases: Exception

Raised when a conflict is detected.

Source code in network_wrangler/scenario.py
class ScenarioConflictError(Exception):\n    \"\"\"Raised when a conflict is detected.\"\"\"\n\n    pass\n
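The conflict check that raises this error reduces to set logic: a project cannot be applied if anything already applied, or about to be applied, appears in its declared conflicts. A self-contained sketch of that rule (project names and the `find_conflicts` helper are hypothetical, not network_wrangler's API):

```python
def find_conflicts(
    project_names: list[str],
    applied_projects: list[str],
    conflicts: dict[str, list[str]],
) -> list[tuple[str, str]]:
    """Return (project, conflicting_project) pairs that would violate the scenario."""
    # conflicts are checked against everything applied or about to be applied
    to_check = set(project_names) | set(applied_projects)
    problems = []
    for p in project_names:
        for other in conflicts.get(p, []):
            if other in to_check:
                problems.append((p, other))
    return problems

# e.g. "toll_i35" declares a conflict with an already-applied "widen_i35"
find_conflicts(["toll_i35"], ["widen_i35"], {"toll_i35": ["widen_i35"]})
```

A non-empty result corresponds to the `ScenarioConflictError` case above.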
"},{"location":"api/#network_wrangler.scenario.ScenarioCorequisiteError","title":"ScenarioCorequisiteError","text":"

Bases: Exception

Raised when a co-requisite is not satisfied.

Source code in network_wrangler/scenario.py
class ScenarioCorequisiteError(Exception):\n    \"\"\"Raised when a co-requisite is not satisfied.\"\"\"\n\n    pass\n
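Co-requisites are the mirror image of conflicts: every co-requisite of a project must itself be applied or planned alongside it. A minimal sketch of that check (names and the `missing_corequisites` helper are illustrative assumptions):

```python
def missing_corequisites(
    project_names: list[str],
    applied_projects: list[str],
    corequisites: dict[str, list[str]],
) -> set[str]:
    """Return co-requisites that are neither applied nor in the incoming list."""
    available = set(project_names) | set(applied_projects)
    missing: set[str] = set()
    for p in project_names:
        missing |= set(corequisites.get(p, [])) - available
    return missing
```

A non-empty result corresponds to the `ScenarioCorequisiteError` case above.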
"},{"location":"api/#network_wrangler.scenario.ScenarioPrerequisiteError","title":"ScenarioPrerequisiteError","text":"

Bases: Exception

Raised when a pre-requisite is not satisfied.

Source code in network_wrangler/scenario.py
class ScenarioPrerequisiteError(Exception):\n    \"\"\"Raised when a pre-requisite is not satisfied.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.scenario.create_base_scenario","title":"create_base_scenario(base_shape_name, base_link_name, base_node_name, roadway_dir='', transit_dir='')","text":"

Creates a base scenario dictionary from roadway and transit network files.

Parameters:

Name Type Description Default base_shape_name str

filename of the base network shape

required base_link_name str

filename of the base network link

required base_node_name str

filename of the base network node

required roadway_dir str

optional path to the base scenario roadway network files

'' transit_dir str

optional path to base scenario transit files

'' Source code in network_wrangler/scenario.py
def create_base_scenario(\n    base_shape_name: str,\n    base_link_name: str,\n    base_node_name: str,\n    roadway_dir: str = \"\",\n    transit_dir: str = \"\",\n) -> dict:\n    \"\"\"Creates a base scenario dictionary from roadway and transit network files.\n\n    Args:\n        base_shape_name: filename of the base network shape\n        base_link_name: filename of the base network link\n        base_node_name: filename of the base network node\n        roadway_dir: optional path to the base scenario roadway network files\n        transit_dir: optional path to base scenario transit files\n    \"\"\"\n    if roadway_dir:\n        base_network_shape_file = os.path.join(roadway_dir, base_shape_name)\n        base_network_link_file = os.path.join(roadway_dir, base_link_name)\n        base_network_node_file = os.path.join(roadway_dir, base_node_name)\n    else:\n        base_network_shape_file = base_shape_name\n        base_network_link_file = base_link_name\n        base_network_node_file = base_node_name\n\n    road_net = load_roadway(\n        links_file=base_network_link_file,\n        nodes_file=base_network_node_file,\n        shapes_file=base_network_shape_file,\n    )\n\n    if transit_dir:\n        transit_net = load_transit(transit_dir)\n        transit_net.road_net = road_net\n    else:\n        transit_net = None\n        WranglerLogger.info(\n            \"No transit directory specified, base scenario will have empty transit network.\"\n        )\n\n    base_scenario = {\"road_net\": road_net, \"transit_net\": transit_net}\n\n    return base_scenario\n
"},{"location":"api/#network_wrangler.scenario.create_scenario","title":"create_scenario(base_scenario={}, project_card_list=[], project_card_filepath=None, filter_tags=[], validate=True)","text":"

Creates scenario from a base scenario and adds project cards.

Project cards can be added using any/all of the following methods: 1. List of ProjectCard instances 2. List of ProjectCard files 3. Directory and optional glob search to find project card files in

Checks that a project of same name is not already in scenario. If selected, will validate ProjectCard before adding. If provided, will only add ProjectCard if it matches at least one filter_tags.

Parameters:

Name Type Description Default base_scenario Union[Scenario, dict]

base Scenario instance or dictionary of attributes.

{} project_card_list

List of ProjectCard instances to create Scenario from. Defaults to [].

[] project_card_filepath Optional[Union[Collection[str], str]]

where the project card files are: a single path, list of paths, a directory, or a glob pattern.

None filter_tags Collection[str]

If used, will only add the project card if its tags match one or more of these filter_tags. Defaults to [] which means no tag-filtering will occur.

[] validate bool

If True, will validate the project card before adding it to the scenario. Defaults to True.

True Source code in network_wrangler/scenario.py
def create_scenario(\n    base_scenario: Union[Scenario, dict] = {},\n    project_card_list=[],\n    project_card_filepath: Optional[Union[Collection[str], str]] = None,\n    filter_tags: Collection[str] = [],\n    validate=True,\n) -> Scenario:\n    \"\"\"Creates scenario from a base scenario and adds project cards.\n\n    Project cards can be added using any/all of the following methods:\n    1. List of ProjectCard instances\n    2. List of ProjectCard files\n    3. Directory and optional glob search to find project card files in\n\n    Checks that a project of same name is not already in scenario.\n    If selected, will validate ProjectCard before adding.\n    If provided, will only add ProjectCard if it matches at least one filter_tags.\n\n    Args:\n        base_scenario: base Scenario instance or dictionary of attributes.\n        project_card_list: List of ProjectCard instances to create Scenario from. Defaults\n            to [].\n        project_card_filepath: where the project card is.  A single path, list of paths,\n        a directory, or a glob pattern. Defaults to None.\n        filter_tags (Collection[str], optional): If used, will only add the project card if\n            its tags match one or more of these filter_tags. Defaults to []\n            which means no tag-filtering will occur.\n        validate (bool, optional): If True, will validate the project card before\n            adding it to the scenario. Defaults to True.\n    \"\"\"\n    scenario = Scenario(base_scenario)\n\n    if project_card_filepath:\n        project_card_list += list(\n            read_cards(project_card_filepath, filter_tags=filter_tags).values()\n        )\n\n    if project_card_list:\n        scenario.add_project_cards(project_card_list, filter_tags=filter_tags, validate=validate)\n\n    return scenario\n
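The tag-filtering rule described above (a card is added when `filter_tags` is empty or at least one of its tags matches) can be sketched independently of the ProjectCard class; the dict-based cards and the `filter_cards_by_tags` helper below are hypothetical stand-ins:

```python
def filter_cards_by_tags(cards: list[dict], filter_tags: list[str]) -> list[dict]:
    """Keep cards whose tags intersect filter_tags; an empty filter keeps everything."""
    if not filter_tags:
        return list(cards)
    wanted = set(filter_tags)
    return [c for c in cards if wanted & set(c.get("tags", []))]

cards = [
    {"project": "widen_i94", "tags": ["highway"]},
    {"project": "add_brt", "tags": ["transit"]},
]
filter_cards_by_tags(cards, ["transit"])
```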
"},{"location":"api/#network_wrangler.scenario.scenario_summary","title":"scenario_summary(scenario, project_detail=True, outfile='', mode='a')","text":"

A high level summary of the created scenario.

Parameters:

Name Type Description Default scenario Scenario

Scenario instance to summarize.

required project_detail bool

If True (default), will write out project card summaries.

True outfile str

If specified, will write scenario summary to text file.

'' mode str

Outfile open mode. \u2018a\u2019 to append, \u2018w\u2019 to overwrite.

'a'

Returns:

Type Description str

string of summary

Source code in network_wrangler/scenario.py
def scenario_summary(\n    scenario: Scenario, project_detail: bool = True, outfile: str = \"\", mode: str = \"a\"\n) -> str:\n    \"\"\"A high level summary of the created scenario.\n\n    Args:\n        scenario: Scenario instance to summarize.\n        project_detail: If True (default), will write out project card summaries.\n        outfile: If specified, will write scenario summary to text file.\n        mode: Outfile open mode. 'a' to append 'w' to overwrite.\n\n    Returns:\n        string of summary\n    \"\"\"\n    WranglerLogger.info(f\"Summarizing Scenario {scenario.name}\")\n    report_str = \"------------------------------\\n\"\n    report_str += f\"Scenario created on {datetime.now()}\\n\"\n\n    report_str += \"Base Scenario:\\n\"\n    report_str += \"--Road Network:\\n\"\n    report_str += f\"----Link File: {scenario.base_scenario['road_net']._links_file}\\n\"\n    report_str += f\"----Node File: {scenario.base_scenario['road_net']._nodes_file}\\n\"\n    report_str += f\"----Shape File: {scenario.base_scenario['road_net']._shapes_file}\\n\"\n    report_str += \"--Transit Network:\\n\"\n    report_str += f\"----Feed Path: {scenario.base_scenario['transit_net'].feed.feed_path}\\n\"\n\n    report_str += \"\\nProject Cards:\\n -\"\n    report_str += \"\\n-\".join([str(pc.file) for p, pc in scenario.project_cards.items()])\n\n    report_str += \"\\nApplied Projects:\\n-\"\n    report_str += \"\\n-\".join(scenario.applied_projects)\n\n    if project_detail:\n        report_str += \"\\n---Project Card Details---\\n\"\n        for p in scenario.project_cards:\n            report_str += \"\\n{}\".format(\n                pprint.pformat(\n                    [scenario.project_cards[p].__dict__ for p in scenario.applied_projects]\n                )\n            )\n\n    if outfile:\n        with open(outfile, mode) as f:\n            f.write(report_str)\n        WranglerLogger.info(f\"Wrote Scenario Report to: {outfile}\")\n\n    return report_str\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork","title":"RoadwayNetwork","text":"

Bases: BaseModel

Representation of a Roadway Network.

Typical usage example:

net = load_roadway(\n    links_file=MY_LINK_FILE,\n    nodes_file=MY_NODE_FILE,\n    shapes_file=MY_SHAPE_FILE,\n)\nmy_selection = {\n    \"link\": [{\"name\": [\"I 35E\"]}],\n    \"A\": {\"osm_node_id\": \"961117623\"},  # start searching for segments at A\n    \"B\": {\"osm_node_id\": \"2564047368\"},\n}\nnet.get_selection(my_selection)\n\nmy_change = [\n    {\n        'property': 'lanes',\n        'existing': 1,\n        'set': 2,\n    },\n    {\n        'property': 'drive_access',\n        'set': 0,\n    },\n]\n\nmy_net.apply_roadway_feature_change(\n    my_net.get_selection(my_selection),\n    my_change\n)\n\n    net.model_net\n    net.is_network_connected(mode=\"drive\", nodes=self.m_nodes_df, links=self.m_links_df)\n    _, disconnected_nodes = net.assess_connectivity(\n        mode=\"walk\",\n        ignore_end_nodes=True,\n        nodes=self.m_nodes_df,\n        links=self.m_links_df\n    )\n    write_roadway(net,filename=my_out_prefix, path=my_dir, for_model = True)\n

Attributes:

Name Type Description nodes_df RoadNodesTable

dataframe of node records.

links_df RoadLinksTable

dataframe of link records and associated properties.

shapes_df RoadShapesTable

dataframe of detailed shape records. This is lazily created only when first accessed because shapes files can be expensive to read.

selections dict

dictionary of stored roadway selection objects, mapped by RoadwayLinkSelection.sel_key or RoadwayNodeSelection.sel_key in case they are made repeatedly.

crs str

coordinate reference system in EPSG number format. Defaults to DEFAULT_CRS which is set to 4326, WGS 84 Lat/Long.

network_hash str

dynamic property of the hashed value of links_df and nodes_df. Used for quickly identifying if a network has changed since various expensive operations have taken place (e.g. generating a ModelRoadwayNetwork or a network graph).

model_net ModelRoadwayNetwork

referenced ModelRoadwayNetwork object which will be lazily created if None or if the network_hash has changed.

Source code in network_wrangler/roadway/network.py
class RoadwayNetwork(BaseModel):\n    \"\"\"Representation of a Roadway Network.\n\n    Typical usage example:\n\n    ```py\n    net = load_roadway(\n        links_file=MY_LINK_FILE,\n        nodes_file=MY_NODE_FILE,\n        shapes_file=MY_SHAPE_FILE,\n    )\n    my_selection = {\n        \"link\": [{\"name\": [\"I 35E\"]}],\n        \"A\": {\"osm_node_id\": \"961117623\"},  # start searching for segments at A\n        \"B\": {\"osm_node_id\": \"2564047368\"},\n    }\n    net.get_selection(my_selection)\n\n    my_change = [\n        {\n            'property': 'lanes',\n            'existing': 1,\n            'set': 2,\n        },\n        {\n            'property': 'drive_access',\n            'set': 0,\n        },\n    ]\n\n    my_net.apply_roadway_feature_change(\n        my_net.get_selection(my_selection),\n        my_change\n    )\n\n        net.model_net\n        net.is_network_connected(mode=\"drive\", nodes=self.m_nodes_df, links=self.m_links_df)\n        _, disconnected_nodes = net.assess_connectivity(\n            mode=\"walk\",\n            ignore_end_nodes=True,\n            nodes=self.m_nodes_df,\n            links=self.m_links_df\n        )\n        write_roadway(net,filename=my_out_prefix, path=my_dir, for_model = True)\n    ```\n\n    Attributes:\n        nodes_df (RoadNodesTable): dataframe of of node records.\n        links_df (RoadLinksTable): dataframe of link records and associated properties.\n        shapes_df (RoadShapestable): data from of detailed shape records  This is lazily\n            created iff it is called because shapes files can be expensive to read.\n        selections (dict): dictionary of stored roadway selection objects, mapped by\n            `RoadwayLinkSelection.sel_key` or `RoadwayNodeSelection.sel_key` in case they are\n                made repeatedly.\n        crs (str): coordinate reference system in ESPG number format. 
Defaults to DEFAULT_CRS\n            which is set to 4326, WGS 84 Lat/Long\n        network_hash: dynamic property of the hashed value of links_df and nodes_df. Used for\n            quickly identifying if a network has changed since various expensive operations have\n            taken place (e.g. generating a ModelRoadwayNetwork or a network graph)\n        model_net (ModelRoadwayNetwork): referenced `ModelRoadwayNetwork` object which will be\n            lazily created if None or if the `network_hash` has changed.\n    \"\"\"\n\n    crs: Literal[LAT_LON_CRS] = LAT_LON_CRS\n    nodes_df: DataFrame[RoadNodesTable]\n    links_df: DataFrame[RoadLinksTable]\n    _shapes_df: Optional[DataFrame[RoadShapesTable]] = None\n\n    _links_file: Optional[Path] = None\n    _nodes_file: Optional[Path] = None\n    _shapes_file: Optional[Path] = None\n\n    _shapes_params: ShapesParams = ShapesParams()\n    _model_net: Optional[ModelRoadwayNetwork] = None\n    _selections: dict[str, Selections] = {}\n    _modal_graphs: dict[str, dict] = defaultdict(lambda: {\"graph\": None, \"hash\": None})\n\n    @field_validator(\"nodes_df\", \"links_df\")\n    def coerce_crs(cls, v, info):\n        \"\"\"Coerce crs of nodes_df and links_df to network crs.\"\"\"\n        net_crs = info.data[\"crs\"]\n        if v.crs != net_crs:\n            WranglerLogger.warning(\n                f\"CRS of links_df ({v.crs}) doesn't match network crs {net_crs}. \\\n                    Changing to network crs.\"\n            )\n            v = v.to_crs(net_crs)\n        return v\n\n    @property\n    def shapes_df(self) -> DataFrame[RoadShapesTable]:\n        \"\"\"Load and return RoadShapesTable.\n\n        If not already loaded, will read from shapes_file and return. If shapes_file is None,\n        will return an empty dataframe with the right schema. 
If shapes_df is already set, will\n        return that.\n        \"\"\"\n        if (self._shapes_df is None or self._shapes_df.empty) and self._shapes_file is not None:\n            self._shapes_df = read_shapes(\n                self._shapes_file,\n                in_crs=self.crs,\n                shapes_params=self._shapes_params,\n            )\n        # if there is NONE, then at least create an empty dataframe with right schema\n        elif self._shapes_df is None:\n            self._shapes_df = empty_df_from_datamodel(RoadShapesTable, crs=self.crs)\n            self._shapes_df.set_index(\"shape_id_idx\", inplace=True)\n\n        return self._shapes_df\n\n    @shapes_df.setter\n    def shapes_df(self, value):\n        self._shapes_df = df_to_shapes_df(value, shapes_params=self._shapes_params)\n\n    @property\n    def network_hash(self) -> str:\n        \"\"\"Hash of the links and nodes dataframes.\"\"\"\n        _value = str.encode(self.links_df.df_hash() + \"-\" + self.nodes_df.df_hash())\n\n        _hash = hashlib.sha256(_value).hexdigest()\n        return _hash\n\n    @property\n    def model_net(self) -> ModelRoadwayNetwork:\n        \"\"\"Return a ModelRoadwayNetwork object for this network.\"\"\"\n        if self._model_net is None or self._model_net._net_hash != self.network_hash:\n            self._model_net = ModelRoadwayNetwork(self)\n        return self._model_net\n\n    @property\n    def summary(self) -> dict:\n        \"\"\"Quick summary dictionary of number of links, nodes.\"\"\"\n        d = {\n            \"links\": len(self.links_df),\n            \"nodes\": len(self.nodes_df),\n        }\n        return d\n\n    @property\n    def link_shapes_df(self) -> gpd.GeoDataFrame:\n        \"\"\"Add shape geometry to links if available.\n\n        returns: shapes merged to nodes dataframe\n        \"\"\"\n        _links_df = copy.deepcopy(self.links_df)\n        link_shapes_df = _links_df.merge(\n            self.shapes_df,\n            
left_on=self.links_df.params.fk_to_shape,\n            right_on=self.shapes_df.params.primary_key,\n            how=\"left\",\n        )\n        return link_shapes_df\n\n    def get_property_by_timespan_and_group(\n        self,\n        link_property: str,\n        category: Union[str, int] = DEFAULT_CATEGORY,\n        timespan: TimespanString = DEFAULT_TIMESPAN,\n        strict_timespan_match: bool = False,\n        min_overlap_minutes: int = 60,\n    ) -> Any:\n        \"\"\"Returns a new dataframe with model_link_id and link property by category and timespan.\n\n        Convenience method for backward compatability.\n\n        Args:\n            link_property: link property to query\n            category: category to query or a list of categories. Defaults to DEFAULT_CATEGORY.\n            timespan: timespan to query in the form of [\"HH:MM\",\"HH:MM\"].\n                Defaults to DEFAULT_TIMESPAN.\n            strict_timespan_match: If True, will only return links that match the timespan exactly.\n                Defaults to False.\n            min_overlap_minutes: If strict_timespan_match is False, will return links that overlap\n                with the timespan by at least this many minutes. 
Defaults to 60.\n        \"\"\"\n        from .links.scopes import prop_for_scope\n\n        return prop_for_scope(\n            self.links_df,\n            link_property,\n            timespan=timespan,\n            category=category,\n            strict_timespan_match=strict_timespan_match,\n            min_overlap_minutes=min_overlap_minutes,\n        )\n\n    def get_selection(\n        self,\n        selection_dict: Union[dict, SelectFacility],\n        overwrite: bool = False,\n    ) -> Union[RoadwayNodeSelection, RoadwayLinkSelection]:\n        \"\"\"Return selection if it already exists, otherwise performs selection.\n\n        Args:\n            selection_dict (dict): SelectFacility dictionary.\n            overwrite: if True, will overwrite any previously cached searches. Defaults to False.\n        \"\"\"\n        key = _create_selection_key(selection_dict)\n        if (key in self._selections) and not overwrite:\n            WranglerLogger.debug(f\"Using cached selection from key: {key}\")\n            return self._selections[key]\n\n        if isinstance(selection_dict, SelectFacility):\n            selection_data = selection_dict\n        elif isinstance(selection_dict, SelectLinksDict):\n            selection_data = SelectFacility(links=selection_dict)\n        elif isinstance(selection_dict, SelectNodesDict):\n            selection_data = SelectFacility(nodes=selection_dict)\n        elif isinstance(selection_dict, dict):\n            selection_data = SelectFacility(**selection_dict)\n        else:\n            WranglerLogger.error(f\"`selection_dict` arg must be a dictionary or SelectFacility model.\\\n                             Received: {selection_dict} of type {type(selection_dict)}\")\n            raise SelectionError(\"selection_dict arg must be a dictionary or SelectFacility model\")\n\n        WranglerLogger.debug(f\"Getting selection from key: {key}\")\n        if selection_data.feature_types in [\"links\", \"segment\"]:\n            
return RoadwayLinkSelection(self, selection_dict)\n        elif selection_data.feature_types == \"nodes\":\n            return RoadwayNodeSelection(self, selection_dict)\n        else:\n            WranglerLogger.error(\"Selection data should be of type 'segment', 'links' or 'nodes'.\")\n            raise SelectionError(\"Selection data should be of type 'segment', 'links' or 'nodes'.\")\n\n    def modal_graph_hash(self, mode) -> str:\n        \"\"\"Hash of the links in order to detect a network change from when graph created.\"\"\"\n        _value = str.encode(self.links_df.df_hash() + \"-\" + mode)\n        _hash = hashlib.sha256(_value).hexdigest()\n\n        return _hash\n\n    def get_modal_graph(self, mode) -> MultiDiGraph:\n        \"\"\"Return a networkx graph of the network for a specific mode.\n\n        Args:\n            mode: mode of the network, one of `drive`,`transit`,`walk`, `bike`\n        \"\"\"\n        from .graph import net_to_graph\n\n        if self._modal_graphs[mode][\"hash\"] != self.modal_graph_hash(mode):\n            self._modal_graphs[mode][\"graph\"] = net_to_graph(self, mode)\n\n        return self._modal_graphs[mode][\"graph\"]\n\n    def apply(self, project_card: Union[ProjectCard, dict]) -> RoadwayNetwork:\n        \"\"\"Wrapper method to apply a roadway project, returning a new RoadwayNetwork instance.\n\n        Args:\n            project_card: either a dictionary of the project card object or ProjectCard instance\n        \"\"\"\n        if not (isinstance(project_card, ProjectCard) or isinstance(project_card, SubProject)):\n            project_card = ProjectCard(project_card)\n\n        project_card.validate()\n\n        if project_card.sub_projects:\n            for sp in project_card.sub_projects:\n                WranglerLogger.debug(f\"- applying subproject: {sp.change_type}\")\n                self._apply_change(sp)\n            return self\n        else:\n            return self._apply_change(project_card)\n\n    def 
_apply_change(self, change: Union[ProjectCard, SubProject]) -> RoadwayNetwork:\n        \"\"\"Apply a single change: a single-project project or a sub-project.\"\"\"\n        if not isinstance(change, SubProject):\n            WranglerLogger.info(f\"Applying Project to Roadway Network: {change.project}\")\n\n        if change.change_type == \"roadway_property_change\":\n            return apply_roadway_property_change(\n                self,\n                self.get_selection(change.facility),\n                change.roadway_property_change[\"property_changes\"],\n            )\n\n        elif change.change_type == \"roadway_addition\":\n            return apply_new_roadway(\n                self,\n                change.roadway_addition,\n            )\n\n        elif change.change_type == \"roadway_deletion\":\n            return apply_roadway_deletion(\n                self,\n                change.roadway_deletion,\n            )\n\n        elif change.change_type == \"pycode\":\n            return apply_calculated_roadway(self, change.pycode)\n        else:\n            WranglerLogger.error(f\"Couldn't find project in: \\n{change.__dict__}\")\n            raise (ValueError(f\"Invalid Project Card Category: {change.change_type}\"))\n\n    def links_with_link_ids(self, link_ids: List[int]) -> DataFrame[RoadLinksTable]:\n        \"\"\"Return subset of links_df based on link_ids list.\"\"\"\n        return filter_links_to_ids(self.links_df, link_ids)\n\n    def links_with_nodes(self, node_ids: List[int]) -> DataFrame[RoadLinksTable]:\n        \"\"\"Return subset of links_df based on node_ids list.\"\"\"\n        return filter_links_to_node_ids(self.links_df, node_ids)\n\n    def nodes_in_links(self) -> DataFrame[RoadNodesTable]:\n        \"\"\"Returns subset of self.nodes_df that are in self.links_df.\"\"\"\n        return filter_nodes_to_links(self.links_df, self.nodes_df)\n\n    def add_links(self, add_links_df: Union[pd.DataFrame, 
DataFrame[RoadLinksTable]]):\n        \"\"\"Validate combined links_df with LinksSchema before adding to self.links_df.\n\n        Args:\n            add_links_df: Dataframe of additional links to add.\n        \"\"\"\n        if not isinstance(add_links_df, RoadLinksTable):\n            add_links_df = data_to_links_df(add_links_df, nodes_df=self.nodes_df)\n        self.links_df = RoadLinksTable(pd.concat([self.links_df, add_links_df]))\n\n    def add_nodes(self, add_nodes_df: Union[pd.DataFrame, DataFrame[RoadNodesTable]]):\n        \"\"\"Validate combined nodes_df with NodesSchema before adding to self.nodes_df.\n\n        Args:\n            add_nodes_df: Dataframe of additional nodes to add.\n        \"\"\"\n        if not isinstance(add_nodes_df, RoadNodesTable):\n            add_nodes_df = data_to_nodes_df(add_nodes_df)\n        self.nodes_df = RoadNodesTable(pd.concat([self.nodes_df, add_nodes_df]))\n\n    def add_shapes(self, add_shapes_df: Union[pd.DataFrame, DataFrame[RoadShapesTable]]):\n        \"\"\"Validate combined shapes_df with RoadShapesTable efore adding to self.shapes_df.\n\n        Args:\n            add_shapes_df: Dataframe of additional shapes to add.\n        \"\"\"\n        if not isinstance(add_shapes_df, RoadShapesTable):\n            add_shapes_df = df_to_shapes_df(add_shapes_df)\n        WranglerLogger.debug(f\"add_shapes_df: \\n{add_shapes_df}\")\n        WranglerLogger.debug(f\"self.shapes_df: \\n{self.shapes_df}\")\n        together_df = pd.concat([self.shapes_df, add_shapes_df])\n        WranglerLogger.debug(f\"together_df: \\n{together_df}\")\n        self.shapes_df = RoadShapesTable(pd.concat([self.shapes_df, add_shapes_df]))\n\n    def delete_links(\n        self,\n        selection_dict: SelectLinksDict,\n        clean_nodes: bool = False,\n        clean_shapes: bool = False,\n    ):\n        \"\"\"Deletes links based on selection dictionary and optionally associated nodes and shapes.\n\n        Args:\n            selection_dict 
(SelectLinks): Dictionary describing link selections as follows:\n                `all`: Optional[bool] = False. If true, will select all.\n                `name`: Optional[list[str]]\n                `ref`: Optional[list[str]]\n                `osm_link_id`:Optional[list[str]]\n                `model_link_id`: Optional[list[int]]\n                `modes`: Optional[list[str]]. Defaults to \"any\"\n                `ignore_missing`: if true, will not error when defaults to True.\n                ...plus any other link property to select on top of these.\n            clean_nodes (bool, optional): If True, will clean nodes uniquely associated with\n                deleted links. Defaults to False.\n            clean_shapes (bool, optional): If True, will clean nodes uniquely associated with\n                deleted links. Defaults to False.\n        \"\"\"\n        selection_dict = SelectLinksDict(**selection_dict).model_dump(\n            exclude_none=True, by_alias=True\n        )\n        selection = self.get_selection({\"links\": selection_dict})\n\n        if clean_nodes:\n            node_ids_to_delete = node_ids_unique_to_link_ids(\n                selection.selected_links, selection.selected_links_df, self.nodes_df\n            )\n            WranglerLogger.debug(\n                f\"Dropping nodes associated with dropped links: \\n{node_ids_to_delete}\"\n            )\n            self.nodes_df = delete_nodes_by_ids(self.nodes_df, del_node_ids=node_ids_to_delete)\n\n        if clean_shapes:\n            shape_ids_to_delete = shape_ids_unique_to_link_ids(\n                selection.selected_links, selection.selected_links_df, self.shapes_df\n            )\n            WranglerLogger.debug(\n                f\"Dropping shapes associated with dropped links: \\n{shape_ids_to_delete}\"\n            )\n            self.shapes_df = delete_shapes_by_ids(\n                self.shapes_df, del_shape_ids=shape_ids_to_delete\n            )\n\n        self.links_df = 
delete_links_by_ids(\n            self.links_df,\n            selection.selected_links,\n            ignore_missing=selection.ignore_missing,\n        )\n\n    def delete_nodes(\n        self,\n        selection_dict: Union[dict, SelectNodesDict],\n        remove_links: bool = False,\n    ) -> None:\n        \"\"\"Deletes nodes from roadway network. Wont delete nodes used by links in network.\n\n        Args:\n            selection_dict: dictionary of node selection criteria in the form of a SelectNodesDict.\n            remove_links: if True, will remove any links that are associated with the nodes.\n                If False, will only remove nodes if they are not associated with any links.\n                Defaults to False.\n\n        raises:\n            NodeDeletionError: If not ignore_missing and selected nodes to delete aren't in network\n        \"\"\"\n        if not isinstance(selection_dict, SelectNodesDict):\n            selection_dict = SelectNodesDict(**selection_dict)\n        selection_dict = selection_dict.model_dump(exclude_none=True, by_alias=True)\n        selection: RoadwayNodeSelection = self.get_selection(\n            {\"nodes\": selection_dict},\n        )\n        if remove_links:\n            del_node_ids = selection.selected_nodes\n            link_ids = self.links_with_nodes(selection.selected_nodes).model_link_id.to_list()\n            WranglerLogger.info(f\"Removing {len(link_ids)} links associated with nodes.\")\n            self.delete_links({\"model_link_id\": link_ids})\n        else:\n            unused_node_ids = node_ids_without_links(self.nodes_df, self.links_df)\n            del_node_ids = list(set(selection.selected_nodes).intersection(unused_node_ids))\n\n        self.nodes_df = delete_nodes_by_ids(\n            self.nodes_df, del_node_ids, ignore_missing=selection.ignore_missing\n        )\n\n    def clean_unused_shapes(self):\n        \"\"\"Removes any unused shapes from network that aren't referenced by links_df.\"\"\"\n 
       from .shapes.shapes import shape_ids_without_links\n\n        del_shape_ids = shape_ids_without_links(self.shapes_df, self.links_df)\n        self.shapes_df = self.shapes_df.drop(del_shape_ids)\n\n    def clean_unused_nodes(self):\n        \"\"\"Removes any unused nodes from network that aren't referenced by links_df.\n\n        NOTE: does not check if these nodes are used by transit, so use with caution.\n        \"\"\"\n        from .nodes.nodes import node_ids_without_links\n\n        node_ids = node_ids_without_links(self.nodes_df, self.links_df)\n        self.nodes_df = self.nodes_df.drop(node_ids)\n\n    def move_nodes(\n        self,\n        node_geometry_change_table: DataFrame[NodeGeometryChangeTable],\n    ):\n        \"\"\"Moves nodes based on updated geometry along with associated links and shape geometry.\n\n        Args:\n            node_geometry_change_table: a table with model_node_id, X, Y, and CRS.\n        \"\"\"\n        node_geometry_change_table = NodeGeometryChangeTable(node_geometry_change_table)\n        node_ids = node_geometry_change_table.model_node_id.to_list()\n        WranglerLogger.debug(f\"Moving nodes: {node_ids}\")\n        self.nodes_df = edit_node_geometry(self.nodes_df, node_geometry_change_table)\n        self.links_df = edit_link_geometry_from_nodes(self.links_df, self.nodes_df, node_ids)\n        self.shapes_df = edit_shape_geometry_from_nodes(\n            self.shapes_df, self.links_df, self.nodes_df, node_ids\n        )\n\n    def has_node(self, model_node_id: int) -> bool:\n        \"\"\"Queries if network has node based on model_node_id.\n\n        Args:\n            model_node_id: model_node_id to check for.\n        \"\"\"\n        has_node = self.nodes_df.model_node_id.isin([model_node_id]).any()\n\n        return has_node\n\n    def has_link(self, ab: tuple) -> bool:\n        \"\"\"Returns true if network has links with AB values.\n\n        Args:\n            ab: Tuple of values corresponding 
with A and B.\n        \"\"\"\n        sel_a, sel_b = ab\n        has_link = ((self.links_df[\"A\"] == sel_a) & (self.links_df[\"B\"] == sel_b)).any()\n        return has_link\n\n    def is_connected(self, mode: str) -> bool:\n        \"\"\"Determines if the network graph is \"strongly\" connected.\n\n        A graph is strongly connected if each vertex is reachable from every other vertex.\n\n        Args:\n            mode:  mode of the network, one of `drive`,`transit`,`walk`, `bike`\n        \"\"\"\n        is_connected = nx.is_strongly_connected(self.get_modal_graph(mode))\n\n        return is_connected\n\n    @staticmethod\n    def add_incident_link_data_to_nodes(\n        links_df: Optional[DataFrame[RoadLinksTable]] = None,\n        nodes_df: Optional[DataFrame[RoadNodesTable]] = None,\n        link_variables: list = [],\n    ) -> DataFrame[RoadNodesTable]:\n        \"\"\"Add data from links going to/from nodes to node.\n\n        Args:\n            links_df: if specified, will assess connectivity of this\n                links list rather than self.links_df\n            nodes_df: if specified, will assess connectivity of this\n                nodes list rather than self.nodes_df\n            link_variables: list of columns in links dataframe to add to incident nodes\n\n        Returns:\n            nodes DataFrame with link data where length is N*number of links going in/out\n        \"\"\"\n        WranglerLogger.debug(f\"Adding following link data to nodes: {link_variables}\")\n\n        _link_vals_to_nodes = [x for x in link_variables if x in links_df.columns]\n        if set(link_variables) - set(_link_vals_to_nodes):\n            WranglerLogger.warning(\n                \"Following columns not in links_df and won't be added to nodes: {} \".format(\n                    list(set(link_variables) - set(_link_vals_to_nodes))\n                )\n            )\n\n        _nodes_from_links_A = nodes_df.merge(\n            links_df[[links_df.params.from_node] + 
_link_vals_to_nodes],\n            how=\"outer\",\n            left_on=nodes_df.params.primary_key,\n            right_on=links_df.params.from_node,\n        )\n        _nodes_from_links_B = nodes_df.merge(\n            links_df[[links_df.params.to_node] + _link_vals_to_nodes],\n            how=\"outer\",\n            left_on=nodes_df.params.primary_key,\n            right_on=links_df.params.to_node,\n        )\n        _nodes_from_links_ab = pd.concat([_nodes_from_links_A, _nodes_from_links_B])\n\n        return _nodes_from_links_ab\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.link_shapes_df","title":"link_shapes_df: gpd.GeoDataFrame property","text":"

Add shape geometry to links if available.

returns: shapes merged to links dataframe

"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.model_net","title":"model_net: ModelRoadwayNetwork property","text":"

Return a ModelRoadwayNetwork object for this network.

"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.network_hash","title":"network_hash: str property","text":"

Hash of the links and nodes dataframes.

"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.shapes_df","title":"shapes_df: DataFrame[RoadShapesTable] property writable","text":"

Load and return RoadShapesTable.

If not already loaded, will read from shapes_file and return. If shapes_file is None, will return an empty dataframe with the right schema. If shapes_df is already set, will return that.

"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.summary","title":"summary: dict property","text":"

Quick summary dictionary of number of links, nodes.

"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.add_incident_link_data_to_nodes","title":"add_incident_link_data_to_nodes(links_df=None, nodes_df=None, link_variables=[]) staticmethod","text":"

Add data from links going to/from nodes to node.

Parameters:

Name Type Description Default links_df Optional[DataFrame[RoadLinksTable]]

if specified, will assess connectivity of this links list rather than self.links_df

None nodes_df Optional[DataFrame[RoadNodesTable]]

if specified, will assess connectivity of this nodes list rather than self.nodes_df

None link_variables list

list of columns in links dataframe to add to incident nodes

[]

Returns:

Type Description DataFrame[RoadNodesTable]

nodes DataFrame with link data where length is N*number of links going in/out

Source code in network_wrangler/roadway/network.py
@staticmethod\ndef add_incident_link_data_to_nodes(\n    links_df: Optional[DataFrame[RoadLinksTable]] = None,\n    nodes_df: Optional[DataFrame[RoadNodesTable]] = None,\n    link_variables: list = [],\n) -> DataFrame[RoadNodesTable]:\n    \"\"\"Add data from links going to/from nodes to node.\n\n    Args:\n        links_df: if specified, will assess connectivity of this\n            links list rather than self.links_df\n        nodes_df: if specified, will assess connectivity of this\n            nodes list rather than self.nodes_df\n        link_variables: list of columns in links dataframe to add to incident nodes\n\n    Returns:\n        nodes DataFrame with link data where length is N*number of links going in/out\n    \"\"\"\n    WranglerLogger.debug(f\"Adding following link data to nodes: {link_variables}\")\n\n    _link_vals_to_nodes = [x for x in link_variables if x in links_df.columns]\n    if set(link_variables) - set(_link_vals_to_nodes):\n        WranglerLogger.warning(\n            \"Following columns not in links_df and won't be added to nodes: {} \".format(\n                list(set(link_variables) - set(_link_vals_to_nodes))\n            )\n        )\n\n    _nodes_from_links_A = nodes_df.merge(\n        links_df[[links_df.params.from_node] + _link_vals_to_nodes],\n        how=\"outer\",\n        left_on=nodes_df.params.primary_key,\n        right_on=links_df.params.from_node,\n    )\n    _nodes_from_links_B = nodes_df.merge(\n        links_df[[links_df.params.to_node] + _link_vals_to_nodes],\n        how=\"outer\",\n        left_on=nodes_df.params.primary_key,\n        right_on=links_df.params.to_node,\n    )\n    _nodes_from_links_ab = pd.concat([_nodes_from_links_A, _nodes_from_links_B])\n\n    return _nodes_from_links_ab\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.add_links","title":"add_links(add_links_df)","text":"

Validate combined links_df with LinksSchema before adding to self.links_df.

Parameters:

Name Type Description Default add_links_df Union[DataFrame, DataFrame[RoadLinksTable]]

Dataframe of additional links to add.

required Source code in network_wrangler/roadway/network.py
def add_links(self, add_links_df: Union[pd.DataFrame, DataFrame[RoadLinksTable]]):\n    \"\"\"Validate combined links_df with LinksSchema before adding to self.links_df.\n\n    Args:\n        add_links_df: Dataframe of additional links to add.\n    \"\"\"\n    if not isinstance(add_links_df, RoadLinksTable):\n        add_links_df = data_to_links_df(add_links_df, nodes_df=self.nodes_df)\n    self.links_df = RoadLinksTable(pd.concat([self.links_df, add_links_df]))\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.add_nodes","title":"add_nodes(add_nodes_df)","text":"

Validate combined nodes_df with NodesSchema before adding to self.nodes_df.

Parameters:

Name Type Description Default add_nodes_df Union[DataFrame, DataFrame[RoadNodesTable]]

Dataframe of additional nodes to add.

required Source code in network_wrangler/roadway/network.py
def add_nodes(self, add_nodes_df: Union[pd.DataFrame, DataFrame[RoadNodesTable]]):\n    \"\"\"Validate combined nodes_df with NodesSchema before adding to self.nodes_df.\n\n    Args:\n        add_nodes_df: Dataframe of additional nodes to add.\n    \"\"\"\n    if not isinstance(add_nodes_df, RoadNodesTable):\n        add_nodes_df = data_to_nodes_df(add_nodes_df)\n    self.nodes_df = RoadNodesTable(pd.concat([self.nodes_df, add_nodes_df]))\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.add_shapes","title":"add_shapes(add_shapes_df)","text":"

Validate combined shapes_df with RoadShapesTable before adding to self.shapes_df.

Parameters:

Name Type Description Default add_shapes_df Union[DataFrame, DataFrame[RoadShapesTable]]

Dataframe of additional shapes to add.

required Source code in network_wrangler/roadway/network.py
def add_shapes(self, add_shapes_df: Union[pd.DataFrame, DataFrame[RoadShapesTable]]):\n    \"\"\"Validate combined shapes_df with RoadShapesTable before adding to self.shapes_df.\n\n    Args:\n        add_shapes_df: Dataframe of additional shapes to add.\n    \"\"\"\n    if not isinstance(add_shapes_df, RoadShapesTable):\n        add_shapes_df = df_to_shapes_df(add_shapes_df)\n    WranglerLogger.debug(f\"add_shapes_df: \\n{add_shapes_df}\")\n    WranglerLogger.debug(f\"self.shapes_df: \\n{self.shapes_df}\")\n    together_df = pd.concat([self.shapes_df, add_shapes_df])\n    WranglerLogger.debug(f\"together_df: \\n{together_df}\")\n    self.shapes_df = RoadShapesTable(together_df)\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.apply","title":"apply(project_card)","text":"

Wrapper method to apply a roadway project, returning a new RoadwayNetwork instance.

Parameters:

Name Type Description Default project_card Union[ProjectCard, dict]

either a dictionary of the project card object or ProjectCard instance

required Source code in network_wrangler/roadway/network.py
def apply(self, project_card: Union[ProjectCard, dict]) -> RoadwayNetwork:\n    \"\"\"Wrapper method to apply a roadway project, returning a new RoadwayNetwork instance.\n\n    Args:\n        project_card: either a dictionary of the project card object or ProjectCard instance\n    \"\"\"\n    if not (isinstance(project_card, ProjectCard) or isinstance(project_card, SubProject)):\n        project_card = ProjectCard(project_card)\n\n    project_card.validate()\n\n    if project_card.sub_projects:\n        for sp in project_card.sub_projects:\n            WranglerLogger.debug(f\"- applying subproject: {sp.change_type}\")\n            self._apply_change(sp)\n        return self\n    else:\n        return self._apply_change(project_card)\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.clean_unused_nodes","title":"clean_unused_nodes()","text":"

Removes any unused nodes from network that aren\u2019t referenced by links_df.

NOTE: does not check if these nodes are used by transit, so use with caution.

Source code in network_wrangler/roadway/network.py
def clean_unused_nodes(self):\n    \"\"\"Removes any unused nodes from network that aren't referenced by links_df.\n\n    NOTE: does not check if these nodes are used by transit, so use with caution.\n    \"\"\"\n    from .nodes.nodes import node_ids_without_links\n\n    node_ids = node_ids_without_links(self.nodes_df, self.links_df)\n    self.nodes_df = self.nodes_df.drop(node_ids)\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.clean_unused_shapes","title":"clean_unused_shapes()","text":"

Removes any unused shapes from network that aren\u2019t referenced by links_df.

Source code in network_wrangler/roadway/network.py
def clean_unused_shapes(self):\n    \"\"\"Removes any unused shapes from network that aren't referenced by links_df.\"\"\"\n    from .shapes.shapes import shape_ids_without_links\n\n    del_shape_ids = shape_ids_without_links(self.shapes_df, self.links_df)\n    self.shapes_df = self.shapes_df.drop(del_shape_ids)\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.coerce_crs","title":"coerce_crs(v, info)","text":"

Coerce crs of nodes_df and links_df to network crs.

Source code in network_wrangler/roadway/network.py
@field_validator(\"nodes_df\", \"links_df\")\ndef coerce_crs(cls, v, info):\n    \"\"\"Coerce crs of nodes_df and links_df to network crs.\"\"\"\n    net_crs = info.data[\"crs\"]\n    if v.crs != net_crs:\n        WranglerLogger.warning(\n            f\"CRS of links_df ({v.crs}) doesn't match network crs {net_crs}. \\\n                Changing to network crs.\"\n        )\n        v = v.to_crs(net_crs)\n    return v\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.delete_links","title":"delete_links(selection_dict, clean_nodes=False, clean_shapes=False)","text":"

Deletes links based on selection dictionary and optionally associated nodes and shapes.

Parameters:

Name Type Description Default selection_dict SelectLinks

Dictionary describing link selections as follows: all: Optional[bool] = False. If true, will select all. name: Optional[list[str]] ref: Optional[list[str]] osm_link_id: Optional[list[str]] model_link_id: Optional[list[int]] modes: Optional[list[str]]. Defaults to \u201cany\u201d ignore_missing: if true, will not error when selected links are not found. Defaults to True. \u2026plus any other link property to select on top of these.

required clean_nodes bool

If True, will clean nodes uniquely associated with deleted links. Defaults to False.

False clean_shapes bool

If True, will clean shapes uniquely associated with deleted links. Defaults to False.

False Source code in network_wrangler/roadway/network.py
def delete_links(\n    self,\n    selection_dict: SelectLinksDict,\n    clean_nodes: bool = False,\n    clean_shapes: bool = False,\n):\n    \"\"\"Deletes links based on selection dictionary and optionally associated nodes and shapes.\n\n    Args:\n        selection_dict (SelectLinks): Dictionary describing link selections as follows:\n            `all`: Optional[bool] = False. If true, will select all.\n            `name`: Optional[list[str]]\n            `ref`: Optional[list[str]]\n            `osm_link_id`: Optional[list[str]]\n            `model_link_id`: Optional[list[int]]\n            `modes`: Optional[list[str]]. Defaults to \"any\"\n            `ignore_missing`: if true, will not error when selected links are not found.\n                Defaults to True.\n            ...plus any other link property to select on top of these.\n        clean_nodes (bool, optional): If True, will clean nodes uniquely associated with\n            deleted links. Defaults to False.\n        clean_shapes (bool, optional): If True, will clean shapes uniquely associated with\n            deleted links. Defaults to False.\n    \"\"\"\n    selection_dict = SelectLinksDict(**selection_dict).model_dump(\n        exclude_none=True, by_alias=True\n    )\n    selection = self.get_selection({\"links\": selection_dict})\n\n    if clean_nodes:\n        node_ids_to_delete = node_ids_unique_to_link_ids(\n            selection.selected_links, selection.selected_links_df, self.nodes_df\n        )\n        WranglerLogger.debug(\n            f\"Dropping nodes associated with dropped links: \\n{node_ids_to_delete}\"\n        )\n        self.nodes_df = delete_nodes_by_ids(self.nodes_df, del_node_ids=node_ids_to_delete)\n\n    if clean_shapes:\n        shape_ids_to_delete = shape_ids_unique_to_link_ids(\n            selection.selected_links, selection.selected_links_df, self.shapes_df\n        )\n        WranglerLogger.debug(\n            f\"Dropping shapes associated with dropped links: \\n{shape_ids_to_delete}\"\n        )\n        self.shapes_df = delete_shapes_by_ids(\n            self.shapes_df, del_shape_ids=shape_ids_to_delete\n        )\n\n    self.links_df = delete_links_by_ids(\n        self.links_df,\n        selection.selected_links,\n        ignore_missing=selection.ignore_missing,\n    )\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.delete_nodes","title":"delete_nodes(selection_dict, remove_links=False)","text":"

Deletes nodes from roadway network. Won\u2019t delete nodes used by links in network.

Parameters:

Name Type Description Default selection_dict Union[dict, SelectNodesDict]

dictionary of node selection criteria in the form of a SelectNodesDict.

required remove_links bool

if True, will remove any links that are associated with the nodes. If False, will only remove nodes if they are not associated with any links. Defaults to False.

False

Raises:

Type Description NodeDeletionError

If not ignore_missing and selected nodes to delete aren\u2019t in network

Source code in network_wrangler/roadway/network.py
def delete_nodes(\n    self,\n    selection_dict: Union[dict, SelectNodesDict],\n    remove_links: bool = False,\n) -> None:\n    \"\"\"Deletes nodes from roadway network. Wont delete nodes used by links in network.\n\n    Args:\n        selection_dict: dictionary of node selection criteria in the form of a SelectNodesDict.\n        remove_links: if True, will remove any links that are associated with the nodes.\n            If False, will only remove nodes if they are not associated with any links.\n            Defaults to False.\n\n    raises:\n        NodeDeletionError: If not ignore_missing and selected nodes to delete aren't in network\n    \"\"\"\n    if not isinstance(selection_dict, SelectNodesDict):\n        selection_dict = SelectNodesDict(**selection_dict)\n    selection_dict = selection_dict.model_dump(exclude_none=True, by_alias=True)\n    selection: RoadwayNodeSelection = self.get_selection(\n        {\"nodes\": selection_dict},\n    )\n    if remove_links:\n        del_node_ids = selection.selected_nodes\n        link_ids = self.links_with_nodes(selection.selected_nodes).model_link_id.to_list()\n        WranglerLogger.info(f\"Removing {len(link_ids)} links associated with nodes.\")\n        self.delete_links({\"model_link_id\": link_ids})\n    else:\n        unused_node_ids = node_ids_without_links(self.nodes_df, self.links_df)\n        del_node_ids = list(set(selection.selected_nodes).intersection(unused_node_ids))\n\n    self.nodes_df = delete_nodes_by_ids(\n        self.nodes_df, del_node_ids, ignore_missing=selection.ignore_missing\n    )\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.get_modal_graph","title":"get_modal_graph(mode)","text":"

Return a networkx graph of the network for a specific mode.

Parameters:

Name Type Description Default mode

mode of the network, one of drive, transit, walk, bike

required Source code in network_wrangler/roadway/network.py
def get_modal_graph(self, mode) -> MultiDiGraph:\n    \"\"\"Return a networkx graph of the network for a specific mode.\n\n    Args:\n        mode: mode of the network, one of `drive`,`transit`,`walk`, `bike`\n    \"\"\"\n    from .graph import net_to_graph\n\n    if self._modal_graphs[mode][\"hash\"] != self.modal_graph_hash(mode):\n        self._modal_graphs[mode][\"graph\"] = net_to_graph(self, mode)\n\n    return self._modal_graphs[mode][\"graph\"]\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.get_property_by_timespan_and_group","title":"get_property_by_timespan_and_group(link_property, category=DEFAULT_CATEGORY, timespan=DEFAULT_TIMESPAN, strict_timespan_match=False, min_overlap_minutes=60)","text":"

Returns a new dataframe with model_link_id and link property by category and timespan.

Convenience method for backward compatibility.

Parameters:

Name Type Description Default link_property str

link property to query

required category Union[str, int]

category to query or a list of categories. Defaults to DEFAULT_CATEGORY.

DEFAULT_CATEGORY timespan TimespanString

timespan to query in the form of [\u201cHH:MM\u201d,\u201dHH:MM\u201d]. Defaults to DEFAULT_TIMESPAN.

DEFAULT_TIMESPAN strict_timespan_match bool

If True, will only return links that match the timespan exactly. Defaults to False.

False min_overlap_minutes int

If strict_timespan_match is False, will return links that overlap with the timespan by at least this many minutes. Defaults to 60.

60 Source code in network_wrangler/roadway/network.py
def get_property_by_timespan_and_group(\n    self,\n    link_property: str,\n    category: Union[str, int] = DEFAULT_CATEGORY,\n    timespan: TimespanString = DEFAULT_TIMESPAN,\n    strict_timespan_match: bool = False,\n    min_overlap_minutes: int = 60,\n) -> Any:\n    \"\"\"Returns a new dataframe with model_link_id and link property by category and timespan.\n\n    Convenience method for backward compatibility.\n\n    Args:\n        link_property: link property to query\n        category: category to query or a list of categories. Defaults to DEFAULT_CATEGORY.\n        timespan: timespan to query in the form of [\"HH:MM\",\"HH:MM\"].\n            Defaults to DEFAULT_TIMESPAN.\n        strict_timespan_match: If True, will only return links that match the timespan exactly.\n            Defaults to False.\n        min_overlap_minutes: If strict_timespan_match is False, will return links that overlap\n            with the timespan by at least this many minutes. Defaults to 60.\n    \"\"\"\n    from .links.scopes import prop_for_scope\n\n    return prop_for_scope(\n        self.links_df,\n        link_property,\n        timespan=timespan,\n        category=category,\n        strict_timespan_match=strict_timespan_match,\n        min_overlap_minutes=min_overlap_minutes,\n    )\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.get_selection","title":"get_selection(selection_dict, overwrite=False)","text":"

Return selection if it already exists, otherwise performs selection.

Parameters:

Name Type Description Default selection_dict dict

SelectFacility dictionary.

required overwrite bool

if True, will overwrite any previously cached searches. Defaults to False.

False Source code in network_wrangler/roadway/network.py
def get_selection(\n    self,\n    selection_dict: Union[dict, SelectFacility],\n    overwrite: bool = False,\n) -> Union[RoadwayNodeSelection, RoadwayLinkSelection]:\n    \"\"\"Return selection if it already exists, otherwise performs selection.\n\n    Args:\n        selection_dict (dict): SelectFacility dictionary.\n        overwrite: if True, will overwrite any previously cached searches. Defaults to False.\n    \"\"\"\n    key = _create_selection_key(selection_dict)\n    if (key in self._selections) and not overwrite:\n        WranglerLogger.debug(f\"Using cached selection from key: {key}\")\n        return self._selections[key]\n\n    if isinstance(selection_dict, SelectFacility):\n        selection_data = selection_dict\n    elif isinstance(selection_dict, SelectLinksDict):\n        selection_data = SelectFacility(links=selection_dict)\n    elif isinstance(selection_dict, SelectNodesDict):\n        selection_data = SelectFacility(nodes=selection_dict)\n    elif isinstance(selection_dict, dict):\n        selection_data = SelectFacility(**selection_dict)\n    else:\n        WranglerLogger.error(f\"`selection_dict` arg must be a dictionary or SelectFacility model.\\\n                         Received: {selection_dict} of type {type(selection_dict)}\")\n        raise SelectionError(\"selection_dict arg must be a dictionary or SelectFacility model\")\n\n    WranglerLogger.debug(f\"Getting selection from key: {key}\")\n    if selection_data.feature_types in [\"links\", \"segment\"]:\n        return RoadwayLinkSelection(self, selection_dict)\n    elif selection_data.feature_types == \"nodes\":\n        return RoadwayNodeSelection(self, selection_dict)\n    else:\n        WranglerLogger.error(\"Selection data should be of type 'segment', 'links' or 'nodes'.\")\n        raise SelectionError(\"Selection data should be of type 'segment', 'links' or 'nodes'.\")\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.has_link","title":"has_link(ab)","text":"

Returns true if network has links with AB values.

Parameters:

Name Type Description Default ab tuple

Tuple of values corresponding with A and B.

required Source code in network_wrangler/roadway/network.py
def has_link(self, ab: tuple) -> bool:\n    \"\"\"Returns true if network has links with AB values.\n\n    Args:\n        ab: Tuple of values corresponding with A and B.\n    \"\"\"\n    sel_a, sel_b = ab\n    # True only if some link matches both the A and the B node id\n    has_link = ((self.links_df[\"A\"] == sel_a) & (self.links_df[\"B\"] == sel_b)).any()\n    return has_link\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.has_node","title":"has_node(model_node_id)","text":"

Queries if network has node based on model_node_id.

Parameters:

Name Type Description Default model_node_id int

model_node_id to check for.

required Source code in network_wrangler/roadway/network.py
def has_node(self, model_node_id: int) -> bool:\n    \"\"\"Queries if network has node based on model_node_id.\n\n    Args:\n        model_node_id: model_node_id to check for.\n    \"\"\"\n    has_node = self.nodes_df.model_node_id.isin([model_node_id]).any()\n\n    return has_node\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.is_connected","title":"is_connected(mode)","text":"

Determines if the network graph is \u201cstrongly\u201d connected.

A graph is strongly connected if each vertex is reachable from every other vertex.

Parameters:

Name Type Description Default mode str

mode of the network, one of drive, transit, walk, bike

required Source code in network_wrangler/roadway/network.py
def is_connected(self, mode: str) -> bool:\n    \"\"\"Determines if the network graph is \"strongly\" connected.\n\n    A graph is strongly connected if each vertex is reachable from every other vertex.\n\n    Args:\n        mode:  mode of the network, one of `drive`,`transit`,`walk`, `bike`\n    \"\"\"\n    is_connected = nx.is_strongly_connected(self.get_modal_graph(mode))\n\n    return is_connected\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.links_with_link_ids","title":"links_with_link_ids(link_ids)","text":"

Return subset of links_df based on link_ids list.

Source code in network_wrangler/roadway/network.py
def links_with_link_ids(self, link_ids: List[int]) -> DataFrame[RoadLinksTable]:\n    \"\"\"Return subset of links_df based on link_ids list.\"\"\"\n    return filter_links_to_ids(self.links_df, link_ids)\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.links_with_nodes","title":"links_with_nodes(node_ids)","text":"

Return subset of links_df based on node_ids list.

Source code in network_wrangler/roadway/network.py
def links_with_nodes(self, node_ids: List[int]) -> DataFrame[RoadLinksTable]:\n    \"\"\"Return subset of links_df based on node_ids list.\"\"\"\n    return filter_links_to_node_ids(self.links_df, node_ids)\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.modal_graph_hash","title":"modal_graph_hash(mode)","text":"

Hash of the links in order to detect a network change from when the graph was created.

Source code in network_wrangler/roadway/network.py
def modal_graph_hash(self, mode) -> str:\n    \"\"\"Hash of the links in order to detect a network change from when graph created.\"\"\"\n    _value = str.encode(self.links_df.df_hash() + \"-\" + mode)\n    _hash = hashlib.sha256(_value).hexdigest()\n\n    return _hash\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.move_nodes","title":"move_nodes(node_geometry_change_table)","text":"

Moves nodes based on updated geometry along with associated links and shape geometry.

Parameters:

Name Type Description Default node_geometry_change_table DataFrame[NodeGeometryChangeTable]

a table with model_node_id, X, Y, and CRS.

required Source code in network_wrangler/roadway/network.py
def move_nodes(\n    self,\n    node_geometry_change_table: DataFrame[NodeGeometryChangeTable],\n):\n    \"\"\"Moves nodes based on updated geometry along with associated links and shape geometry.\n\n    Args:\n        node_geometry_change_table: a table with model_node_id, X, Y, and CRS.\n    \"\"\"\n    node_geometry_change_table = NodeGeometryChangeTable(node_geometry_change_table)\n    node_ids = node_geometry_change_table.model_node_id.to_list()\n    WranglerLogger.debug(f\"Moving nodes: {node_ids}\")\n    self.nodes_df = edit_node_geometry(self.nodes_df, node_geometry_change_table)\n    self.links_df = edit_link_geometry_from_nodes(self.links_df, self.nodes_df, node_ids)\n    self.shapes_df = edit_shape_geometry_from_nodes(\n        self.shapes_df, self.links_df, self.nodes_df, node_ids\n    )\n
"},{"location":"api/#network_wrangler.roadway.network.RoadwayNetwork.nodes_in_links","title":"nodes_in_links()","text":"

Returns subset of self.nodes_df that are in self.links_df.

Source code in network_wrangler/roadway/network.py
def nodes_in_links(self) -> DataFrame[RoadNodesTable]:\n    \"\"\"Returns subset of self.nodes_df that are in self.links_df.\"\"\"\n    return filter_nodes_to_links(self.links_df, self.nodes_df)\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork","title":"TransitNetwork","text":"

Bases: object

Representation of a Transit Network.

Typical usage example:

import network_wrangler as wr\ntc=wr.load_transit(stpaul_gtfs)\n

Attributes:

Name Type Description feed

gtfs feed object with interlinked tables.

road_net RoadwayNetwork

Associated roadway network object.

graph MultiDiGraph

Graph for associated roadway network object.

feed_path str

Where the feed was read in from.

validated_frequencies bool

The frequencies have been validated.

validated_road_network_consistency

The network has been validated against the road network.

Source code in network_wrangler/transit/network.py
class TransitNetwork(object):\n    \"\"\"Representation of a Transit Network.\n\n    Typical usage example:\n    ``` py\n    import network_wrangler as wr\n    tc=wr.load_transit(stpaul_gtfs)\n    ```\n\n    Attributes:\n        feed: gtfs feed object with interlinked tables.\n        road_net (RoadwayNetwork): Associated roadway network object.\n        graph (nx.MultiDiGraph): Graph for associated roadway network object.\n        feed_path (str): Where the feed was read in from.\n        validated_frequencies (bool): The frequencies have been validated.\n        validated_road_network_consistency (): The network has been validated against\n            the road network.\n    \"\"\"\n\n    TIME_COLS = [\"arrival_time\", \"departure_time\", \"start_time\", \"end_time\"]\n\n    def __init__(self, feed: Feed):\n        \"\"\"Constructor for TransitNetwork.\n\n        Args:\n            feed: Feed object representing the transit network gtfs tables\n        \"\"\"\n        WranglerLogger.debug(\"Creating new TransitNetwork.\")\n\n        self._road_net: Optional[RoadwayNetwork] = None\n        self.feed: Feed = feed\n        self.graph: nx.MultiDiGraph = None\n\n        # initialize\n        self._consistent_with_road_net = False\n\n        # cached selections\n        self._selections: dict[str, dict] = {}\n\n    @property\n    def feed_path(self):\n        \"\"\"Pass through property from Feed.\"\"\"\n        return self.feed.feed_path\n\n    @property\n    def config(self):\n        \"\"\"Pass through property from Feed.\"\"\"\n        return self.feed.config\n\n    @property\n    def feed(self):\n        \"\"\"Feed associated with the transit network.\"\"\"\n        return self._feed\n\n    @feed.setter\n    def feed(self, feed: Feed):\n        if not isinstance(feed, Feed):\n            msg = f\"TransitNetwork's feed value must be a valid Feed instance. 
\\\n                             This is a {type(feed)}.\"\n            WranglerLogger.error(msg)\n            raise ValueError(msg)\n        if self._road_net is None or transit_road_net_consistency(feed, self._road_net):\n            self._feed = feed\n            self._stored_feed_hash = copy.deepcopy(feed.hash)\n        else:\n            WranglerLogger.error(\"Can't assign Feed inconsistent with set Roadway Network.\")\n            raise TransitRoadwayConsistencyError(\n                \"Can't assign Feed inconsistent with set RoadwayNetwork.\"\n            )\n\n    @property\n    def road_net(self) -> RoadwayNetwork:\n        \"\"\"Roadway network associated with the transit network.\"\"\"\n        return self._road_net\n\n    @road_net.setter\n    def road_net(self, road_net: RoadwayNetwork):\n        if not isinstance(road_net, RoadwayNetwork):\n            msg = f\"TransitNetwork's road_net: value must be a valid RoadwayNetwork instance. \\\n                             This is a {type(road_net)}.\"\n            WranglerLogger.error(msg)\n            raise ValueError(msg)\n        if transit_road_net_consistency(self.feed, road_net):\n            self._road_net = road_net\n            self._stored_road_net_hash = copy.deepcopy(self.road_net.network_hash)\n            self._consistent_with_road_net = True\n        else:\n            WranglerLogger.error(\n                \"Can't assign inconsistent RoadwayNetwork - Roadway Network not \\\n                                 set, but can be referenced separately.\"\n            )\n            raise TransitRoadwayConsistencyError(\"Can't assign inconsistent RoadwayNetwork.\")\n\n    @property\n    def feed_hash(self):\n        \"\"\"Return the hash of the feed.\"\"\"\n        return self.feed.hash\n\n    @property\n    def consistent_with_road_net(self) -> bool:\n        \"\"\"Indicate if road_net is consistent with transit network.\n\n        Checks the network hash of when consistency was last evaluated. 
If transit network or\n        roadway network has changed, will re-evaluate consistency and return the updated value and\n        update self._stored_road_net_hash.\n\n        Returns:\n            Boolean indicating if road_net is consistent with transit network.\n        \"\"\"\n        updated_road = self.road_net.network_hash != self._stored_road_net_hash\n        updated_feed = self.feed_hash != self._stored_feed_hash\n\n        if updated_road or updated_feed:\n            self._consistent_with_road_net = transit_road_net_consistency(self.feed, self.road_net)\n            self._stored_road_net_hash = copy.deepcopy(self.road_net.network_hash)\n            self._stored_feed_hash = copy.deepcopy(self.feed_hash)\n        return self._consistent_with_road_net\n\n    def __deepcopy__(self, memo):\n        \"\"\"Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.\"\"\"\n        COPY_REF_NOT_VALUE = [\"_road_net\"]\n        # Create a new, empty instance\n        copied_net = self.__class__.__new__(self.__class__)\n        # Return the new TransitNetwork instance\n        attribute_dict = vars(self)\n\n        # Copy the attributes to the new instance\n        for attr_name, attr_value in attribute_dict.items():\n            # WranglerLogger.debug(f\"Copying {attr_name}\")\n            if attr_name in COPY_REF_NOT_VALUE:\n                # If the attribute is in the COPY_REF_NOT_VALUE list, assign the reference\n                setattr(copied_net, attr_name, attr_value)\n            else:\n                # WranglerLogger.debug(f\"making deep copy: {attr_name}\")\n                # For other attributes, perform a deep copy\n                setattr(copied_net, attr_name, copy.deepcopy(attr_value, memo))\n\n        return copied_net\n\n    def deepcopy(self):\n        \"\"\"Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.\"\"\"\n        return copy.deepcopy(self)\n\n    @property\n    def stops_gdf(self) 
-> gpd.GeoDataFrame:\n        \"\"\"Return stops as a GeoDataFrame using set roadway geometry.\"\"\"\n        if self.road_net is not None:\n            ref_nodes = self.road_net.nodes_df\n        else:\n            ref_nodes = None\n        return to_points_gdf(self.feed.stops, nodes_df=ref_nodes)\n\n    @property\n    def shapes_gdf(self) -> gpd.GeoDataFrame:\n        \"\"\"Return aggregated shapes as a GeoDataFrame using set roadway geometry.\"\"\"\n        if self.road_net is not None:\n            ref_nodes = self.road_net.nodes_df\n        else:\n            ref_nodes = None\n        return shapes_to_trip_shapes_gdf(self.feed.shapes, ref_nodes_df=ref_nodes)\n\n    @property\n    def shape_links_gdf(self) -> gpd.GeoDataFrame:\n        \"\"\"Return shape-links as a GeoDataFrame using set roadway geometry.\"\"\"\n        if self.road_net is not None:\n            ref_nodes = self.road_net.nodes_df\n        else:\n            ref_nodes = None\n        return shapes_to_shape_links_gdf(self.feed.shapes, ref_nodes_df=ref_nodes)\n\n    @property\n    def stop_time_links_gdf(self) -> gpd.GeoDataFrame:\n        \"\"\"Return stop-time-links as a GeoDataFrame using set roadway geometry.\"\"\"\n        if self.road_net is not None:\n            ref_nodes = self.road_net.nodes_df\n        else:\n            ref_nodes = None\n        return stop_times_to_stop_time_links_gdf(\n            self.feed.stop_times, self.feed.stops, ref_nodes_df=ref_nodes\n        )\n\n    @property\n    def stop_times_points_gdf(self) -> gpd.GeoDataFrame:\n        \"\"\"Return stop-time-points as a GeoDataFrame using set roadway geometry.\"\"\"\n        if self.road_net is not None:\n            ref_nodes = self.road_net.nodes_df\n        else:\n            ref_nodes = None\n\n        return stop_times_to_stop_time_points_gdf(\n            self.feed.stop_times, self.feed.stops, ref_nodes_df=ref_nodes\n        )\n\n    def get_selection(\n        self,\n        selection_dict: dict,\n        
overwrite: bool = False,\n    ) -> TransitSelection:\n        \"\"\"Return selection if it already exists, otherwise performs selection.\n\n        Will raise an error if no trips found.\n\n        Args:\n            selection_dict (dict): _description_\n            overwrite: if True, will overwrite any previously cached searches. Defaults to False.\n\n        Returns:\n            Selection: Selection object\n        \"\"\"\n        key = dict_to_hexkey(selection_dict)\n\n        if (key not in self._selections) or overwrite:\n            WranglerLogger.debug(f\"Performing selection from key: {key}\")\n            self._selections[key] = TransitSelection(self, selection_dict)\n        else:\n            WranglerLogger.debug(f\"Using cached selection from key: {key}\")\n\n        if not self._selections[key]:\n            WranglerLogger.debug(\n                f\"No links or nodes found for selection dict: \\n {selection_dict}\"\n            )\n            raise ValueError(\"Selection not successful.\")\n        return self._selections[key]\n\n    def apply(self, project_card: Union[ProjectCard, dict], **kwargs) -> \"TransitNetwork\":\n        \"\"\"Wrapper method to apply a roadway project, returning a new TransitNetwork instance.\n\n        Args:\n            project_card: either a dictionary of the project card object or ProjectCard instance\n            **kwargs: keyword arguments to pass to project application\n        \"\"\"\n        if not (isinstance(project_card, ProjectCard) or isinstance(project_card, SubProject)):\n            project_card = ProjectCard(project_card)\n\n        if not project_card.valid:\n            WranglerLogger.error(\"Invalid Project Card: {project_card}\")\n            raise ValueError(f\"Project card {project_card.project} not valid.\")\n\n        if project_card.sub_projects:\n            for sp in project_card.sub_projects:\n                WranglerLogger.debug(f\"- applying subproject: {sp.change_type}\")\n                
self._apply_change(sp, **kwargs)\n            return self\n        else:\n            return self._apply_change(project_card, **kwargs)\n\n    def _apply_change(\n        self,\n        change: Union[ProjectCard, SubProject],\n        reference_road_net: Optional[RoadwayNetwork] = None,\n    ) -> TransitNetwork:\n        \"\"\"Apply a single change: a single-project project or a sub-project.\"\"\"\n        if not isinstance(change, SubProject):\n            WranglerLogger.info(f\"Applying Project to Transit Network: {change.project}\")\n\n        if change.change_type == \"transit_property_change\":\n            return apply_transit_property_change(\n                self,\n                self.get_selection(change.service),\n                change.transit_property_change,\n            )\n\n        elif change.change_type == \"transit_routing_change\":\n            return apply_transit_routing_change(\n                self,\n                self.get_selection(change.service),\n                change.transit_routing_change,\n                reference_road_net=reference_road_net,\n            )\n\n        elif change.change_type == \"add_new_route\":\n            return apply_add_transit_route_change(self, change.transit_route_addition)\n\n        elif change.change_type == \"roadway_deletion\":\n            # FIXME\n            raise NotImplementedError(\"Roadway deletion check not yet implemented.\")\n\n        elif change.change_type == \"pycode\":\n            return apply_calculated_transit(self, change.pycode)\n\n        else:\n            msg = f\"Not a currently valid transit project: {change}.\"\n            WranglerLogger.error(msg)\n            raise NotImplementedError(msg)\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.config","title":"config property","text":"

Pass through property from Feed.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.consistent_with_road_net","title":"consistent_with_road_net: bool property","text":"

Indicate if road_net is consistent with transit network.

Checks the network hash from when consistency was last evaluated. If the transit network or roadway network has changed since then, consistency will be re-evaluated, self._stored_road_net_hash updated, and the updated value returned.

Returns:

Type Description bool

Boolean indicating if road_net is consistent with transit network.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.feed","title":"feed property writable","text":"

Feed associated with the transit network.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.feed_hash","title":"feed_hash property","text":"

Return the hash of the feed.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.feed_path","title":"feed_path property","text":"

Pass through property from Feed.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.road_net","title":"road_net: RoadwayNetwork property writable","text":"

Roadway network associated with the transit network.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.shape_links_gdf","title":"shape_links_gdf: gpd.GeoDataFrame property","text":"

Return shape-links as a GeoDataFrame using set roadway geometry.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.shapes_gdf","title":"shapes_gdf: gpd.GeoDataFrame property","text":"

Return aggregated shapes as a GeoDataFrame using set roadway geometry.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.stop_time_links_gdf","title":"stop_time_links_gdf: gpd.GeoDataFrame property","text":"

Return stop-time-links as a GeoDataFrame using set roadway geometry.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.stop_times_points_gdf","title":"stop_times_points_gdf: gpd.GeoDataFrame property","text":"

Return stop-time-points as a GeoDataFrame using set roadway geometry.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.stops_gdf","title":"stops_gdf: gpd.GeoDataFrame property","text":"

Return stops as a GeoDataFrame using set roadway geometry.

"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.__deepcopy__","title":"__deepcopy__(memo)","text":"

Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.

Source code in network_wrangler/transit/network.py
def __deepcopy__(self, memo):\n    \"\"\"Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.\"\"\"\n    COPY_REF_NOT_VALUE = [\"_road_net\"]\n    # Create a new, empty instance\n    copied_net = self.__class__.__new__(self.__class__)\n    # Return the new TransitNetwork instance\n    attribute_dict = vars(self)\n\n    # Copy the attributes to the new instance\n    for attr_name, attr_value in attribute_dict.items():\n        # WranglerLogger.debug(f\"Copying {attr_name}\")\n        if attr_name in COPY_REF_NOT_VALUE:\n            # If the attribute is in the COPY_REF_NOT_VALUE list, assign the reference\n            setattr(copied_net, attr_name, attr_value)\n        else:\n            # WranglerLogger.debug(f\"making deep copy: {attr_name}\")\n            # For other attributes, perform a deep copy\n            setattr(copied_net, attr_name, copy.deepcopy(attr_value, memo))\n\n    return copied_net\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.__init__","title":"__init__(feed)","text":"

Constructor for TransitNetwork.

Parameters:

Name Type Description Default feed Feed

Feed object representing the transit network gtfs tables

required Source code in network_wrangler/transit/network.py
def __init__(self, feed: Feed):\n    \"\"\"Constructor for TransitNetwork.\n\n    Args:\n        feed: Feed object representing the transit network gtfs tables\n    \"\"\"\n    WranglerLogger.debug(\"Creating new TransitNetwork.\")\n\n    self._road_net: Optional[RoadwayNetwork] = None\n    self.feed: Feed = feed\n    self.graph: nx.MultiDiGraph = None\n\n    # initialize\n    self._consistent_with_road_net = False\n\n    # cached selections\n    self._selections: dict[str, dict] = {}\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.apply","title":"apply(project_card, **kwargs)","text":"

Wrapper method to apply a project card, returning a new TransitNetwork instance.

Parameters:

Name Type Description Default project_card Union[ProjectCard, dict]

either a dictionary of the project card object or ProjectCard instance

required **kwargs

keyword arguments to pass to project application

{} Source code in network_wrangler/transit/network.py
def apply(self, project_card: Union[ProjectCard, dict], **kwargs) -> \"TransitNetwork\":\n    \"\"\"Wrapper method to apply a project card, returning a new TransitNetwork instance.\n\n    Args:\n        project_card: either a dictionary of the project card object or ProjectCard instance\n        **kwargs: keyword arguments to pass to project application\n    \"\"\"\n    if not (isinstance(project_card, ProjectCard) or isinstance(project_card, SubProject)):\n        project_card = ProjectCard(project_card)\n\n    if not project_card.valid:\n        WranglerLogger.error(f\"Invalid Project Card: {project_card}\")\n        raise ValueError(f\"Project card {project_card.project} not valid.\")\n\n    if project_card.sub_projects:\n        for sp in project_card.sub_projects:\n            WranglerLogger.debug(f\"- applying subproject: {sp.change_type}\")\n            self._apply_change(sp, **kwargs)\n        return self\n    else:\n        return self._apply_change(project_card, **kwargs)\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.deepcopy","title":"deepcopy()","text":"

Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.

Source code in network_wrangler/transit/network.py
def deepcopy(self):\n    \"\"\"Returns copied TransitNetwork instance with deep copy of Feed but not roadway net.\"\"\"\n    return copy.deepcopy(self)\n
"},{"location":"api/#network_wrangler.transit.network.TransitNetwork.get_selection","title":"get_selection(selection_dict, overwrite=False)","text":"

Return selection if it already exists, otherwise performs selection.

Will raise an error if no trips found.

Parameters:

Name Type Description Default selection_dict dict

dictionary of transit selection criteria.

required overwrite bool

if True, will overwrite any previously cached searches. Defaults to False.

False

Returns:

Name Type Description Selection TransitSelection

Selection object

Source code in network_wrangler/transit/network.py
def get_selection(\n    self,\n    selection_dict: dict,\n    overwrite: bool = False,\n) -> TransitSelection:\n    \"\"\"Return selection if it already exists, otherwise performs selection.\n\n    Will raise an error if no trips found.\n\n    Args:\n        selection_dict (dict): dictionary of transit selection criteria.\n        overwrite: if True, will overwrite any previously cached searches. Defaults to False.\n\n    Returns:\n        Selection: Selection object\n    \"\"\"\n    key = dict_to_hexkey(selection_dict)\n\n    if (key not in self._selections) or overwrite:\n        WranglerLogger.debug(f\"Performing selection from key: {key}\")\n        self._selections[key] = TransitSelection(self, selection_dict)\n    else:\n        WranglerLogger.debug(f\"Using cached selection from key: {key}\")\n\n    if not self._selections[key]:\n        WranglerLogger.debug(\n            f\"No links or nodes found for selection dict: \\n {selection_dict}\"\n        )\n        raise ValueError(\"Selection not successful.\")\n    return self._selections[key]\n
"},{"location":"api/#network_wrangler.transit.network.TransitRoadwayConsistencyError","title":"TransitRoadwayConsistencyError","text":"

Bases: Exception

Error raised when transit network is inconsistent with roadway network.

Source code in network_wrangler/transit/network.py
class TransitRoadwayConsistencyError(Exception):\n    \"\"\"Error raised when transit network is inconsistent with roadway network.\"\"\"\n\n    pass\n
"},{"location":"api/#parameters","title":"Parameters","text":"

Parameters for Network Wrangler.

"},{"location":"api/#network_wrangler.params.COPY_FROM_GP_TO_ML","title":"COPY_FROM_GP_TO_ML = ['ref', 'roadway', 'access', 'distance', 'bike_access', 'drive_access', 'walk_access', 'bus_only', 'rail_only'] module-attribute","text":"

(list(str)): list of attributes to copy from a general purpose lane to managed lane so long as a ML_ doesn\u2019t exist.

"},{"location":"api/#network_wrangler.params.COPY_TO_ACCESS_EGRESS","title":"COPY_TO_ACCESS_EGRESS = ['ref', 'ML_access', 'ML_drive_access', 'ML_bus_only', 'ML_rail_only'] module-attribute","text":"

(list(str)): list of attributes copied from GP lanes to access and egress dummy links.

"},{"location":"api/#network_wrangler.params.DEFAULT_CATEGORY","title":"DEFAULT_CATEGORY = 'any' module-attribute","text":"

Default category for scoped values.

"},{"location":"api/#network_wrangler.params.DEFAULT_MAX_SEARCH_BREADTH","title":"DEFAULT_MAX_SEARCH_BREADTH = 10 module-attribute","text":"

(int): default for maximum number of links traversed between links that match the searched name when searching for paths between A and B node

"},{"location":"api/#network_wrangler.params.DEFAULT_SEARCH_BREADTH","title":"DEFAULT_SEARCH_BREADTH = 5 module-attribute","text":"

(int): default for initial number of links from name-based selection that are traversed before trying another shortest path when searching for paths between A and B node

"},{"location":"api/#network_wrangler.params.DEFAULT_SEARCH_MODES","title":"DEFAULT_SEARCH_MODES = ['drive'] module-attribute","text":"

(list(str)): default modes to search for paths between A and B node

"},{"location":"api/#network_wrangler.params.DEFAULT_SP_WEIGHT_COL","title":"DEFAULT_SP_WEIGHT_COL = 'i' module-attribute","text":"

(str): default column to use as weights in the shortest path calculations.

"},{"location":"api/#network_wrangler.params.DEFAULT_SP_WEIGHT_FACTOR","title":"DEFAULT_SP_WEIGHT_FACTOR = 100 module-attribute","text":"

(Union(int, float)): default penalty assigned for each degree of distance between a link and a link with the searched-for name when searching for paths between A and B node

"},{"location":"api/#network_wrangler.params.DEFAULT_TIMESPAN","title":"DEFAULT_TIMESPAN = ['00:00', '24:00'] module-attribute","text":"

Default timespan for scoped values.

"},{"location":"api/#network_wrangler.params.EST_PD_READ_SPEED","title":"EST_PD_READ_SPEED = {'csv': 0.03, 'parquet': 0.005, 'geojson': 0.03, 'json': 0.15, 'txt': 0.04} module-attribute","text":"

Read sec / MB - WILL DEPEND ON SPECIFIC COMPUTER"},{"location":"api/#network_wrangler.params.MANAGED_LANES_LINK_ID_SCALAR","title":"MANAGED_LANES_LINK_ID_SCALAR = 1000000 module-attribute","text":"

scalar value added to the general purpose lanes\u2019 model_link_id when creating an associated link for a parallel managed lane

"},{"location":"api/#network_wrangler.params.MANAGED_LANES_REQUIRED_ATTRIBUTES","title":"MANAGED_LANES_REQUIRED_ATTRIBUTES = ['A', 'B', 'model_link_id'] module-attribute","text":"

(list(str)): list of attributes that must be provided in managed lanes

"},{"location":"api/#network_wrangler.params.LinksParams","title":"LinksParams dataclass","text":"

Parameters for RoadLinksTable.

Source code in network_wrangler/params.py
@dataclass\nclass LinksParams:\n    \"\"\"Parameters for RoadLinksTable.\"\"\"\n\n    primary_key: str = field(default=\"model_link_id\")\n    _addtl_unique_ids: list[str] = field(default_factory=lambda: [])\n    _addtl_explicit_ids: list[str] = field(default_factory=lambda: [\"osm_link_id\"])\n    from_node: str = field(default=\"A\")\n    to_node: str = field(default=\"B\")\n    fk_to_shape: str = field(default=\"shape_id\")\n    table_type: Literal[\"links\"] = field(default=\"links\")\n    source_file: str = field(default=None)\n    modes_to_network_link_variables: dict = field(\n        default_factory=lambda: MODES_TO_NETWORK_LINK_VARIABLES\n    )\n\n    @property\n    def idx_col(self):\n        \"\"\"Column to make the index of the table.\"\"\"\n        return self.primary_key + \"_idx\"\n\n    @property\n    def fks_to_nodes(self):\n        \"\"\"Foreign keys to nodes in the network.\"\"\"\n        return [self.from_node, self.to_node]\n\n    @property\n    def unique_ids(self) -> List[str]:\n        \"\"\"List of unique ids for the table.\"\"\"\n        _uids = self._addtl_unique_ids + [self.primary_key]\n        return list(set(_uids))\n\n    @property\n    def explicit_ids(self) -> List[str]:\n        \"\"\"List of columns that can be used to easily find specific rows in the table.\"\"\"\n        return list(set(self.unique_ids + self._addtl_explicit_ids))\n\n    @property\n    def display_cols(self) -> List[str]:\n        \"\"\"List of columns to display in the table.\"\"\"\n        _addtl = [\"lanes\"]\n        return list(set(self.explicit_ids + self.fks_to_nodes + _addtl))\n
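The properties of `LinksParams` are all derived from a few configurable field names. A trimmed-down sketch (only the fields needed to exercise the derived properties; defaults match the source above) shows how `unique_ids`, `explicit_ids`, and `fks_to_nodes` fall out of the dataclass fields:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LinksParams:
    """Trimmed-down copy of the dataclass above, enough to exercise the properties."""

    primary_key: str = "model_link_id"
    _addtl_unique_ids: List[str] = field(default_factory=list)
    _addtl_explicit_ids: List[str] = field(default_factory=lambda: ["osm_link_id"])
    from_node: str = "A"
    to_node: str = "B"

    @property
    def fks_to_nodes(self) -> List[str]:
        # Foreign keys to nodes: the from/to node columns.
        return [self.from_node, self.to_node]

    @property
    def unique_ids(self) -> List[str]:
        # Primary key plus any additional unique ids, deduplicated.
        return list(set(self._addtl_unique_ids + [self.primary_key]))

    @property
    def explicit_ids(self) -> List[str]:
        # Unique ids plus ids usable for lookups that aren't guaranteed unique.
        return list(set(self.unique_ids + self._addtl_explicit_ids))

params = LinksParams()
```

Note that because the properties pass through `set()`, the returned column lists are deduplicated but unordered.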
"},{"location":"api/#network_wrangler.params.LinksParams.display_cols","title":"display_cols: List[str] property","text":"

List of columns to display in the table.

"},{"location":"api/#network_wrangler.params.LinksParams.explicit_ids","title":"explicit_ids: List[str] property","text":"

List of columns that can be used to easily find specific rows in the table.

"},{"location":"api/#network_wrangler.params.LinksParams.fks_to_nodes","title":"fks_to_nodes property","text":"

Foreign keys to nodes in the network.

"},{"location":"api/#network_wrangler.params.LinksParams.idx_col","title":"idx_col property","text":"

Column to make the index of the table.

"},{"location":"api/#network_wrangler.params.LinksParams.unique_ids","title":"unique_ids: List[str] property","text":"

List of unique ids for the table.

"},{"location":"api/#network_wrangler.params.NodesParams","title":"NodesParams dataclass","text":"

Parameters for RoadNodesTable.

Source code in network_wrangler/params.py
@dataclass\nclass NodesParams:\n    \"\"\"Parameters for RoadNodesTable.\"\"\"\n\n    primary_key: str = field(default=\"model_node_id\")\n    _addtl_unique_ids: list[str] = field(default_factory=lambda: [\"osm_node_id\"])\n    _addtl_explicit_ids: list[str] = field(default_factory=lambda: [])\n    source_file: str = field(default=None)\n    table_type: Literal[\"nodes\"] = field(default=\"nodes\")\n    x_field: str = field(default=\"X\")\n    y_field: str = field(default=\"Y\")\n\n    @property\n    def geometry_props(self) -> List[str]:\n        \"\"\"List of geometry properties.\"\"\"\n        return [self.x_field, self.y_field, \"geometry\"]\n\n    @property\n    def idx_col(self) -> str:\n        \"\"\"Column to make the index of the table.\"\"\"\n        return self.primary_key + \"_idx\"\n\n    @property\n    def unique_ids(self) -> List[str]:\n        \"\"\"List of unique ids for the table.\"\"\"\n        _uids = self._addtl_unique_ids + [self.primary_key]\n        return list(set(_uids))\n\n    @property\n    def explicit_ids(self) -> List[str]:\n        \"\"\"List of columns that can be used to easily find specific records in the table.\"\"\"\n        _eids = self._addtl_unique_ids + self.unique_ids\n        return list(set(_eids))\n\n    @property\n    def display_cols(self) -> List[str]:\n        \"\"\"Columns to display in the table.\"\"\"\n        return self.explicit_ids\n
"},{"location":"api/#network_wrangler.params.NodesParams.display_cols","title":"display_cols: List[str] property","text":"

Columns to display in the table.

"},{"location":"api/#network_wrangler.params.NodesParams.explicit_ids","title":"explicit_ids: List[str] property","text":"

List of columns that can be used to easily find specific records in the table.

"},{"location":"api/#network_wrangler.params.NodesParams.geometry_props","title":"geometry_props: List[str] property","text":"

List of geometry properties.

"},{"location":"api/#network_wrangler.params.NodesParams.idx_col","title":"idx_col: str property","text":"

Column to make the index of the table.

"},{"location":"api/#network_wrangler.params.NodesParams.unique_ids","title":"unique_ids: List[str] property","text":"

List of unique ids for the table.

"},{"location":"api/#network_wrangler.params.ShapesParams","title":"ShapesParams dataclass","text":"

Parameters for RoadShapesTable.

Source code in network_wrangler/params.py
@dataclass\nclass ShapesParams:\n    \"\"\"Parameters for RoadShapesTable.\"\"\"\n\n    primary_key: str = field(default=\"shape_id\")\n    _addtl_unique_ids: list[str] = field(default_factory=lambda: [])\n    table_type: Literal[\"shapes\"] = field(default=\"shapes\")\n    source_file: str = field(default=None)\n\n    @property\n    def idx_col(self) -> str:\n        \"\"\"Column to make the index of the table.\"\"\"\n        return self.primary_key + \"_idx\"\n\n    @property\n    def unique_ids(self) -> list[str]:\n        \"\"\"List of unique ids for the table.\"\"\"\n        return list(set(self._addtl_unique_ids + [self.primary_key]))\n
"},{"location":"api/#network_wrangler.params.ShapesParams.idx_col","title":"idx_col: str property","text":"

Column to make the index of the table.

"},{"location":"api/#network_wrangler.params.ShapesParams.unique_ids","title":"unique_ids: list[str] property","text":"

List of unique ids for the table.

"},{"location":"api/#projects","title":"Projects","text":"

Projects are how you manipulate the networks. Each project type is defined in a module in the projects folder and accepts a RoadwayNetwork and/or TransitNetwork as an input and returns the same objects (manipulated) as an output.

"},{"location":"api/#roadway","title":"Roadway","text":"

The roadway module contains submodules which define and extend the links, nodes, and shapes dataframe objects within a RoadwayNetwork object, as well as other classes and methods which support and extend the RoadwayNetwork class.

"},{"location":"api/#network-objects","title":"Network Objects","text":"

Submodules which define and extend the links, nodes, and shapes dataframe objects within a RoadwayNetwork object. Includes classes which define:

"},{"location":"api/#links","title":"Links","text":"

:: network_wrangler.roadway.links.io :: network_wrangler.roadway.links.create :: network_wrangler.roadway.links.delete :: network_wrangler.roadway.links.edit :: network_wrangler.roadway.links.filters :: network_wrangler.roadway.links.geo :: network_wrangler.roadway.links.scopes :: network_wrangler.roadway.links.summary :: network_wrangler.roadway.links.validate :: network_wrangler.roadway.links.df_accessors

"},{"location":"api/#nodes","title":"Nodes","text":"

:: network_wrangler.roadway.nodes.io :: network_wrangler.roadway.nodes.create :: network_wrangler.roadway.nodes.delete :: network_wrangler.roadway.nodes.edit :: network_wrangler.roadway.nodes.filters :: network_wrangler.roadway.nodes

"},{"location":"api/#shapes","title":"Shapes","text":"

:: network_wrangler.roadway.shapes.io :: network_wrangler.roadway.shapes.create :: network_wrangler.roadway.shapes.edit :: network_wrangler.roadway.shapes.delete :: network_wrangler.roadway.shapes.filters :: network_wrangler.roadway.shapes.shapes

"},{"location":"api/#supporting-classes-methods-parameters","title":"Supporting Classes, Methods + Parameters","text":"

:: network_wrangler.roadway.segment :: network_wrangler.roadway.subnet :: network_wrangler.roadway.graph

"},{"location":"api/#utils-and-functions","title":"Utils and Functions","text":"

General utility functions used throughout package.

Helper functions for reading and writing files to reduce boilerplate.

Helper functions for data models.

Functions to help with network manipulations in dataframes.

Functions related to parsing and comparing time objects and series.

Internal function terminology for timespan scopes:

Utility functions for pandas data manipulation.

Helper geographic manipulation functions.

Dataframe accessors that allow functions to be called directly on the dataframe.

Logging utilities for Network Wrangler.

"},{"location":"api/#network_wrangler.utils.utils.check_one_or_one_superset_present","title":"check_one_or_one_superset_present(mixed_list, all_fields_present)","text":"

Checks that exactly one of the fields in mixed_list is in fields_present or one superset.

Source code in network_wrangler/utils/utils.py
def check_one_or_one_superset_present(\n    mixed_list: list[Union[str, list[str]]], all_fields_present: list[str]\n) -> bool:\n    \"\"\"Checks that exactly one of the fields in mixed_list is in fields_present or one superset.\"\"\"\n    normalized_list = normalize_to_lists(mixed_list)\n\n    list_items_present = [i for i in normalized_list if set(i).issubset(all_fields_present)]\n\n    if len(list_items_present) == 1:\n        return True\n\n    return list_elements_subset_of_single_element(list_items_present)\n
"},{"location":"api/#network_wrangler.utils.utils.combine_unique_unhashable_list","title":"combine_unique_unhashable_list(list1, list2)","text":"

Combines lists preserving order of first and removing duplicates.

Parameters:

Name Type Description Default list1 list

The first list.

required list2 list

The second list.

required

Returns:

Name Type Description list

A new list containing the elements from list1 followed by the

unique elements from list2.

Example

list1 = [1, 2, 3] list2 = [2, 3, 4, 5] combine_unique_unhashable_list(list1, list2) [1, 2, 3, 4, 5]

Source code in network_wrangler/utils/utils.py
def combine_unique_unhashable_list(list1: list, list2: list):\n    \"\"\"Combines lists preserving order of first and removing duplicates.\n\n    Args:\n        list1 (list): The first list.\n        list2 (list): The second list.\n\n    Returns:\n        list: A new list containing the elements from list1 followed by the\n        unique elements from list2.\n\n    Example:\n        >>> list1 = [1, 2, 3]\n        >>> list2 = [2, 3, 4, 5]\n        >>> combine_unique_unhashable_list(list1, list2)\n        [1, 2, 3, 4, 5]\n    \"\"\"\n    return [item for item in list1 if item not in list2] + list2\n
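The function above works even for unhashable elements (e.g. lists of lists) because it relies on `in` membership tests rather than sets. A self-contained copy of the one-liner with both a hashable and an unhashable example:

```python
def combine_unique_unhashable_list(list1: list, list2: list) -> list:
    # Keep the items of list1 that are not already in list2, then append
    # list2 wholesale; only `in` is used, so elements need not be hashable.
    return [item for item in list1 if item not in list2] + list2

ints = combine_unique_unhashable_list([1, 2, 3], [2, 3, 4, 5])
nested = combine_unique_unhashable_list([[1], [2]], [[2], [3]])  # lists are unhashable
```

Note the cost is O(len(list1) * len(list2)) due to the linear `in` scans — the price paid for supporting unhashable elements.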
"},{"location":"api/#network_wrangler.utils.utils.delete_keys_from_dict","title":"delete_keys_from_dict(dictionary, keys)","text":"

Removes list of keys from potentially nested dictionary.

SOURCE: https://stackoverflow.com/questions/3405715/ User: @mseifert

Parameters:

Name Type Description Default dictionary dict

dictionary to remove keys from

required keys list

list of keys to remove

required Source code in network_wrangler/utils/utils.py
def delete_keys_from_dict(dictionary: dict, keys: list) -> dict:\n    \"\"\"Removes list of keys from potentially nested dictionary.\n\n    SOURCE: https://stackoverflow.com/questions/3405715/\n    User: @mseifert\n\n    Args:\n        dictionary: dictionary to remove keys from\n        keys: list of keys to remove\n\n    \"\"\"\n    keys_set = set(keys)  # Just an optimization for the \"if key in keys\" lookup.\n\n    modified_dict = {}\n    for key, value in dictionary.items():\n        if key not in keys_set:\n            if isinstance(value, dict):\n                modified_dict[key] = delete_keys_from_dict(value, keys_set)\n            else:\n                modified_dict[key] = (\n                    value  # or copy.deepcopy(value) if a copy is desired for non-dicts.\n                )\n    return modified_dict\n
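The recursion above rebuilds the dictionary level by level, skipping banned keys at every depth. A self-contained copy demonstrating removal from a nested dict:

```python
def delete_keys_from_dict(dictionary: dict, keys) -> dict:
    # Recursively rebuild the dict, dropping any key in `keys` at every level.
    keys_set = set(keys)  # optimization for the "key in keys" lookup
    modified_dict = {}
    for key, value in dictionary.items():
        if key not in keys_set:
            if isinstance(value, dict):
                modified_dict[key] = delete_keys_from_dict(value, keys_set)
            else:
                modified_dict[key] = value
    return modified_dict

nested = {"a": 1, "secret": 2, "child": {"secret": 3, "b": 4}}
cleaned = delete_keys_from_dict(nested, ["secret"])
```

The original dict is left untouched; only the returned copy has the keys removed (non-dict values are shared, not deep-copied).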
"},{"location":"api/#network_wrangler.utils.utils.dict_to_hexkey","title":"dict_to_hexkey(d)","text":"

Converts a dictionary to a hexdigest of the sha1 hash of the dictionary.

Parameters:

Name Type Description Default d dict

dictionary to convert to string

required

Returns:

Name Type Description str str

hexdigest of the sha1 hash of dictionary

Source code in network_wrangler/utils/utils.py
def dict_to_hexkey(d: dict) -> str:\n    \"\"\"Converts a dictionary to a hexdigest of the sha1 hash of the dictionary.\n\n    Args:\n        d (dict): dictionary to convert to string\n\n    Returns:\n        str: hexdigest of the sha1 hash of dictionary\n    \"\"\"\n    return hashlib.sha1(str(d).encode()).hexdigest()\n
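Since the hash is computed over `str(d)`, two dicts with the same items but different insertion order produce different keys — worth keeping in mind when this is used as a cache key. A self-contained copy:

```python
import hashlib

def dict_to_hexkey(d: dict) -> str:
    # sha1 over the dict's string representation; stable for a given
    # dict object, but sensitive to key insertion order.
    return hashlib.sha1(str(d).encode()).hexdigest()

key = dict_to_hexkey({"links": {"name": ["Main St"]}})
```

For an order-insensitive key one could hash `str(sorted(d.items()))` instead, at the cost of requiring sortable keys.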
"},{"location":"api/#network_wrangler.utils.utils.findkeys","title":"findkeys(node, kv)","text":"

Returns values of all keys in various objects.

Adapted from arainchi on Stack Overflow: https://stackoverflow.com/questions/9807634/find-all-occurrences-of-a-key-in-nested-dictionaries-and-lists

Source code in network_wrangler/utils/utils.py
def findkeys(node, kv):\n    \"\"\"Returns values of all keys in various objects.\n\n    Adapted from arainchi on Stack Overflow:\n    https://stackoverflow.com/questions/9807634/find-all-occurrences-of-a-key-in-nested-dictionaries-and-lists\n    \"\"\"\n    if isinstance(node, list):\n        for i in node:\n            for x in findkeys(i, kv):\n                yield x\n    elif isinstance(node, dict):\n        if kv in node:\n            yield node[kv]\n        for j in node.values():\n            for x in findkeys(j, kv):\n                yield x\n
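Because `findkeys` is a generator that recurses through both lists and dicts, it yields every occurrence of a key at any nesting depth. A compact reimplementation (using `yield from` for brevity; behavior matches the source above) with a nested example:

```python
def findkeys(node, kv):
    # Yield every value stored under key `kv`, at any depth, in dicts/lists.
    if isinstance(node, list):
        for item in node:
            yield from findkeys(item, kv)
    elif isinstance(node, dict):
        if kv in node:
            yield node[kv]
        for value in node.values():
            yield from findkeys(value, kv)

doc = {"id": 1, "children": [{"id": 2}, {"meta": {"id": 3}}]}
found = list(findkeys(doc, "id"))
```

Results come back in document order: a dict yields its own match before descending into its values.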
"},{"location":"api/#network_wrangler.utils.utils.generate_list_of_new_ids","title":"generate_list_of_new_ids(input_ids, existing_ids, id_scalar, iter_val=10, max_iter=1000)","text":"

Generates a list of new IDs based on the input IDs, existing IDs, and other parameters.

Parameters:

Name Type Description Default input_ids list[str]

The input IDs for which new IDs need to be generated.

required existing_ids Series

The existing IDs that should be avoided when generating new IDs.

required id_scalar int

The scalar value used to generate new IDs.

required iter_val int

The iteration value used in the generation process. Defaults to 10.

10 max_iter int

The maximum number of iterations allowed in the generation process. Defaults to 1000.

1000

Returns:

Type Description list[str]

list[str]: A list of new IDs generated based on the input IDs and other parameters.

Source code in network_wrangler/utils/utils.py
def generate_list_of_new_ids(\n    input_ids: list[str],\n    existing_ids: pd.Series,\n    id_scalar: int,\n    iter_val: int = 10,\n    max_iter: int = 1000,\n) -> list[str]:\n    \"\"\"Generates a list of new IDs based on the input IDs, existing IDs, and other parameters.\n\n    Args:\n        input_ids (list[str]): The input IDs for which new IDs need to be generated.\n        existing_ids (pd.Series): The existing IDs that should be avoided when generating new IDs.\n        id_scalar (int): The scalar value used to generate new IDs.\n        iter_val (int, optional): The iteration value used in the generation process.\n            Defaults to 10.\n        max_iter (int, optional): The maximum number of iterations allowed in the generation\n            process. Defaults to 1000.\n\n    Returns:\n        list[str]: A list of new IDs generated based on the input IDs and other parameters.\n    \"\"\"\n    # keep new_ids as list to preserve order\n    new_ids = []\n    existing_ids = set(existing_ids)\n    for i in input_ids:\n        new_id = generate_new_id(\n            i,\n            pd.Series(list(existing_ids)),\n            id_scalar,\n            iter_val=iter_val,\n            max_iter=max_iter,\n        )\n        new_ids.append(new_id)\n        existing_ids.add(new_id)\n    return new_ids\n
"},{"location":"api/#network_wrangler.utils.utils.generate_new_id","title":"generate_new_id(input_id, existing_ids, id_scalar, iter_val=10, max_iter=1000)","text":"

Generate a new ID that isn\u2019t in existing_ids.

TODO: check a registry rather than existing IDs

Parameters:

Name Type Description Default input_id str

id to use to generate new id.

required existing_ids Series

series that has existing IDs that should be unique

required id_scalar int

scalar value to initially use to create the new id.

required iter_val int

iteration value to use in the generation process.

10 max_iter int

maximum number of iterations allowed in the generation process.

1000 Source code in network_wrangler/utils/utils.py
def generate_new_id(\n    input_id: str,\n    existing_ids: pd.Series,\n    id_scalar: int,\n    iter_val: int = 10,\n    max_iter: int = 1000,\n) -> str:\n    \"\"\"Generate a new ID that isn't in existing_ids.\n\n    TODO: check a registry rather than existing IDs\n\n    Args:\n        input_id: id to use to generate new id.\n        existing_ids: series that has existing IDs that should be unique\n        id_scalar: scalar value to initially use to create the new id.\n        iter_val: iteration value to use in the generation process.\n        max_iter: maximum number of iterations allowed in the generation process.\n    \"\"\"\n    str_prefix, input_id, str_suffix = split_string_prefix_suffix_from_num(input_id)\n\n    for i in range(1, max_iter + 1):\n        new_id = f\"{str_prefix}{int(input_id) + id_scalar + (iter_val * i)}{str_suffix}\"\n        if new_id not in existing_ids.values:\n            return new_id\n        elif i == max_iter:\n            WranglerLogger.error(f\"Cannot generate new id within max iters of {max_iter}.\")\n            raise ValueError(\"Cannot create unique new id.\")\n
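The generation loop bumps the numeric part of the id by `id_scalar` plus growing multiples of `iter_val` until an unused candidate appears. A simplified sketch — note it substitutes a plain `set` for the `pd.Series` of existing ids used by the library version, so membership checks differ from the original only in container type:

```python
import re

def _split_num(input_string):
    # Same prefix/last-number/suffix split as split_string_prefix_suffix_from_num.
    match = re.compile(r"(.*?)(\d+)(\D*)$").match(str(input_string))
    if not match:
        return str(input_string), 0, ""
    prefix, numeric_part, suffix = match.groups()
    return prefix, int(numeric_part), suffix

def generate_new_id(input_id, existing_ids, id_scalar, iter_val=10, max_iter=1000):
    # Try num + id_scalar + iter_val, then + 2*iter_val, ... until unused.
    # `existing_ids` here is a plain set (the library version takes a pd.Series).
    prefix, num, suffix = _split_num(input_id)
    for i in range(1, max_iter + 1):
        new_id = f"{prefix}{num + id_scalar + iter_val * i}{suffix}"
        if new_id not in existing_ids:
            return new_id
    raise ValueError("Cannot create unique new id.")

existing = {"link_1000110"}
new_id = generate_new_id("link_100", existing, 1000000)
```

With the first candidate `link_1000110` already taken, the second iteration produces `link_1000120`.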
"},{"location":"api/#network_wrangler.utils.utils.get_overlapping_range","title":"get_overlapping_range(ranges)","text":"

Returns the overlapping range for a list of ranges or tuples defining ranges.

Parameters:

Name Type Description Default ranges list[Union[tuple[int], range]]

A list of ranges or tuples defining ranges.

required

Returns:

Type Description Union[None, range]

Union[None, range]: The overlapping range if found, otherwise None.

Example

ranges = [(1, 5), (3, 7), (6, 10)] get_overlapping_range(ranges) range(3, 5)

Source code in network_wrangler/utils/utils.py
def get_overlapping_range(ranges: list[Union[tuple[int], range]]) -> Union[None, range]:\n    \"\"\"Returns the overlapping range for a list of ranges or tuples defining ranges.\n\n    Args:\n        ranges (list[Union[tuple[int], range]]): A list of ranges or tuples defining ranges.\n\n    Returns:\n        Union[None, range]: The overlapping range if found, otherwise None.\n\n    Example:\n        >>> ranges = [(1, 5), (3, 7), (6, 10)]\n        >>> get_overlapping_range(ranges)\n        range(3, 5)\n\n    \"\"\"\n    _ranges = [r if isinstance(r, range) else range(r[0], r[1]) for r in ranges]\n\n    _overlap_start = max(r.start for r in _ranges)\n    _overlap_end = min(r.stop for r in _ranges)\n\n    if _overlap_start < _overlap_end:\n        return range(_overlap_start, _overlap_end)\n    else:\n        return None\n
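The intersection logic is: normalize tuples to `range` objects, then the common overlap starts at the latest start and ends at the earliest stop. A self-contained copy:

```python
from typing import Union

def get_overlapping_range(ranges) -> Union[None, range]:
    # Normalize (start, stop) tuples to range objects, then intersect all of
    # them: overlap runs from the latest start to the earliest stop.
    _ranges = [r if isinstance(r, range) else range(r[0], r[1]) for r in ranges]
    overlap_start = max(r.start for r in _ranges)
    overlap_end = min(r.stop for r in _ranges)
    if overlap_start < overlap_end:
        return range(overlap_start, overlap_end)
    return None
```

Note the overlap must be shared by every range passed in: for the three documented ranges `[(1, 5), (3, 7), (6, 10)]` the latest start (6) is not below the earliest stop (5), so the function returns `None`; `range(3, 5)` is the overlap of just the first two.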
"},{"location":"api/#network_wrangler.utils.utils.list_elements_subset_of_single_element","title":"list_elements_subset_of_single_element(mixed_list)","text":"

Find the first list in the mixed_list.

Source code in network_wrangler/utils/utils.py
def list_elements_subset_of_single_element(mixed_list: list[Union[str, list[str]]]) -> bool:\n    \"\"\"Find the first list in the mixed_list.\"\"\"\n    potential_supersets = []\n    for item in mixed_list:\n        if isinstance(item, list) and len(item) > 0:\n            potential_supersets.append(set(item))\n\n    # If no list is found, return False\n    if not potential_supersets:\n        return False\n\n    normalized_list = normalize_to_lists(mixed_list)\n\n    valid_supersets = []\n    for ss in potential_supersets:\n        if all(ss.issuperset(i) for i in normalized_list):\n            valid_supersets.append(ss)\n\n    return len(valid_supersets) == 1\n
"},{"location":"api/#network_wrangler.utils.utils.make_slug","title":"make_slug(text, delimiter='_')","text":"

Makes a slug from text.

Source code in network_wrangler/utils/utils.py
def make_slug(text: str, delimiter: str = \"_\") -> str:\n    \"\"\"Makes a slug from text.\"\"\"\n    text = re.sub(\"[,.;@#?!&$']+\", \"\", text.lower())\n    return re.sub(\"[\\ ]+\", delimiter, text)  # noqa: W605\n
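The slug is built in two passes: strip common punctuation after lowercasing, then collapse runs of spaces into the delimiter. A self-contained copy with raw-string patterns (equivalent to the source; hyphens are deliberately preserved):

```python
import re

def make_slug(text: str, delimiter: str = "_") -> str:
    # Drop common punctuation, lowercase, then collapse space runs into the delimiter.
    text = re.sub(r"[,.;@#?!&$']+", "", text.lower())
    return re.sub(r"[ ]+", delimiter, text)

slug = make_slug("I-94 Managed Lanes, Phase 2!")
```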
"},{"location":"api/#network_wrangler.utils.utils.normalize_to_lists","title":"normalize_to_lists(mixed_list)","text":"

Turn a mixed list of scalars and lists into a list of lists.

Source code in network_wrangler/utils/utils.py
def normalize_to_lists(mixed_list: list[Union[str, list]]) -> list[list]:\n    \"\"\"Turn a mixed list of scalars and lists into a list of lists.\"\"\"\n    normalized_list = []\n    for item in mixed_list:\n        if isinstance(item, str):\n            normalized_list.append([item])\n        else:\n            normalized_list.append(item)\n    return normalized_list\n
"},{"location":"api/#network_wrangler.utils.utils.split_string_prefix_suffix_from_num","title":"split_string_prefix_suffix_from_num(input_string)","text":"

Split a string prefix and suffix from last number.

Parameters:

Name Type Description Default input_string str

The input string to be processed.

required

Returns:

Name Type Description tuple

A tuple containing the prefix (including preceding numbers), the last numeric part as an integer, and the suffix.

Notes

This function uses regular expressions to split a string into three parts: the prefix, the last numeric part, and the suffix. The prefix includes any preceding numbers, the last numeric part is converted to an integer, and the suffix includes any non-digit characters after the last numeric part.

Examples:

>>> split_string_prefix_suffix_from_num(\"abc123def456\")\n('abc', 123, 'def456')\n
>>> split_string_prefix_suffix_from_num(\"hello\")\n('hello', 0, '')\n
>>> split_string_prefix_suffix_from_num(\"123\")\n('', 123, '')\n
Source code in network_wrangler/utils/utils.py
def split_string_prefix_suffix_from_num(input_string: str):\n    \"\"\"Split a string prefix and suffix from *last* number.\n\n    Args:\n        input_string (str): The input string to be processed.\n\n    Returns:\n        tuple: A tuple containing the prefix (including preceding numbers),\n               the last numeric part as an integer, and the suffix.\n\n    Notes:\n        This function uses regular expressions to split a string into three parts:\n        the prefix, the last numeric part, and the suffix. The prefix includes any\n        preceding numbers, the last numeric part is converted to an integer, and\n        the suffix includes any non-digit characters after the last numeric part.\n\n    Examples:\n        >>> split_string_prefix_suffix_from_num(\"abc123def456\")\n        ('abc', 123, 'def456')\n\n        >>> split_string_prefix_suffix_from_num(\"hello\")\n        ('hello', 0, '')\n\n        >>> split_string_prefix_suffix_from_num(\"123\")\n        ('', 123, '')\n\n    \"\"\"\n    input_string = str(input_string)\n    pattern = re.compile(r\"(.*?)(\\d+)(\\D*)$\")\n    match = pattern.match(input_string)\n\n    if match:\n        # Extract the groups: prefix (including preceding numbers), last numeric part, suffix\n        prefix, numeric_part, suffix = match.groups()\n        # Convert the numeric part to an integer\n        num_variable = int(numeric_part)\n        return prefix, num_variable, suffix\n    else:\n        return input_string, 0, \"\"\n
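A self-contained copy of the regex split. Because the trailing `(\D*)$` group cannot match digits, the `(\d+)` group always captures the *final* digit run — so `"abc123def456"` splits as `('abc123def', 456, '')`, consistent with the "last number" wording above, with earlier numbers absorbed into the prefix:

```python
import re

def split_string_prefix_suffix_from_num(input_string):
    # (.*?) takes everything before the last digit run; (\D*)$ guarantees
    # the captured (\d+) is the final number in the string.
    input_string = str(input_string)
    match = re.compile(r"(.*?)(\d+)(\D*)$").match(input_string)
    if not match:
        return input_string, 0, ""
    prefix, numeric_part, suffix = match.groups()
    return prefix, int(numeric_part), suffix
```

Strings with no digits fail the match entirely and fall back to `(input_string, 0, "")`.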
"},{"location":"api/#network_wrangler.utils.utils.topological_sort","title":"topological_sort(adjacency_list, visited_list)","text":"

Topological sorting for Acyclic Directed Graph.

Parameters: - adjacency_list (dict): A dictionary representing the adjacency list of the graph. - visited_list (list): A list representing the visited status of each vertex in the graph.

Returns: - output_stack (list): A list containing the vertices in topological order.

This function performs a topological sort on an acyclic directed graph. It takes an adjacency list and a visited list as input. The adjacency list represents the connections between vertices in the graph, and the visited list keeps track of the visited status of each vertex.

The function uses a recursive helper function to perform the topological sort. It starts by iterating over each vertex in the visited list. For each unvisited vertex, it calls the helper function, which recursively visits all the neighbors of the vertex and adds them to the output stack in reverse order. Finally, it returns the output stack, which contains the vertices in topological order.

Source code in network_wrangler/utils/utils.py
def topological_sort(adjacency_list, visited_list):\n    \"\"\"Topological sorting for Acyclic Directed Graph.\n\n    Parameters:\n    - adjacency_list (dict): A dictionary representing the adjacency list of the graph.\n    - visited_list (list): A list representing the visited status of each vertex in the graph.\n\n    Returns:\n    - output_stack (list): A list containing the vertices in topological order.\n\n    This function performs a topological sort on an acyclic directed graph. It takes an adjacency\n    list and a visited list as input. The adjacency list represents the connections between\n    vertices in the graph, and the visited list keeps track of the visited status of each vertex.\n\n    The function uses a recursive helper function to perform the topological sort. It starts by\n    iterating over each vertex in the visited list. For each unvisited vertex, it calls the helper\n    function, which recursively visits all the neighbors of the vertex and adds them to the output\n    stack in reverse order. Finally, it returns the output stack, which contains the vertices in\n    topological order.\n    \"\"\"\n    output_stack = []\n\n    def _topology_sort_util(vertex):\n        if not visited_list[vertex]:\n            visited_list[vertex] = True\n            for neighbor in adjacency_list[vertex]:\n                _topology_sort_util(neighbor)\n            output_stack.insert(0, vertex)\n\n    for vertex in visited_list:\n        _topology_sort_util(vertex)\n\n    return output_stack\n
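A self-contained copy of the sort on a small DAG. Note `visited_list` is keyed by vertex (a dict of vertex → bool), matching how the source indexes and iterates it:

```python
def topological_sort(adjacency_list, visited_list):
    # Depth-first: a vertex is pushed to the FRONT of the stack only after all
    # of its descendants have been placed, yielding a valid topological order.
    output_stack = []

    def _topology_sort_util(vertex):
        if not visited_list[vertex]:
            visited_list[vertex] = True
            for neighbor in adjacency_list[vertex]:
                _topology_sort_util(neighbor)
            output_stack.insert(0, vertex)

    for vertex in visited_list:
        _topology_sort_util(vertex)

    return output_stack

# Diamond-shaped DAG: a -> {b, c} -> d
adjacency = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
order = topological_sort(adjacency, {v: False for v in adjacency})
```

Every edge u → v ends up with u before v in `order`; the function assumes the graph is acyclic (a cycle would recurse back into visited vertices and silently drop the ordering guarantee).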
"},{"location":"api/#network_wrangler.utils.io.FileReadError","title":"FileReadError","text":"

Bases: Exception

Raised when there is an error reading a file.

Source code in network_wrangler/utils/io.py
class FileReadError(Exception):\n    \"\"\"Raised when there is an error reading a file.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.io.FileWriteError","title":"FileWriteError","text":"

Bases: Exception

Raised when there is an error writing a file.

Source code in network_wrangler/utils/io.py
class FileWriteError(Exception):\n    \"\"\"Raised when there is an error writing a file.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.io.read_table","title":"read_table(filename, sub_filename=None)","text":"

Read file and return a dataframe or geodataframe.

If filename is a zip file, will unzip to a temporary directory.

NOTE: if you are accessing multiple files from this zip file you will want to unzip it first and THEN access the table files so you don\u2019t create multiple duplicate unzipped tmp dirs.

Parameters:

Name Type Description Default filename Path

filename to load.

required sub_filename str

if the file is a zip, the sub_filename to load.

None Source code in network_wrangler/utils/io.py
def read_table(filename: Path, sub_filename: str = None) -> Union[pd.DataFrame, gpd.GeoDataFrame]:\n    \"\"\"Read file and return a dataframe or geodataframe.\n\n    If filename is a zip file, will unzip to a temporary directory.\n\n    NOTE:  if you are accessing multiple files from this zip file you will want to unzip it first\n    and THEN access the table files so you don't create multiple duplicate unzipped tmp dirs.\n\n    Args:\n        filename (Path): filename to load.\n        sub_filename: if the file is a zip, the sub_filename to load.\n\n    \"\"\"\n    filename = Path(filename)\n    if filename.suffix == \".zip\":\n        filename = unzip_file(filename) / sub_filename\n    WranglerLogger.debug(f\"Estimated read time: {_estimate_read_time_of_file(filename)}.\")\n    if any([x in filename.suffix for x in [\"geojson\", \"shp\", \"csv\"]]):\n        try:\n            return gpd.read_file(filename)\n        except:  # noqa: E722\n            if \"csv\" in filename.suffix:\n                return pd.read_csv(filename)\n            raise FileReadError\n    elif \"parquet\" in filename.suffix:\n        try:\n            return gpd.read_parquet(filename)\n        except:  # noqa: E722\n            return pd.read_parquet(filename)\n    elif \"json\" in filename.suffix:\n        with open(filename) as f:\n            return pd.read_json(f, orient=\"records\")\n    raise NotImplementedError(f\"Filetype {filename.suffix} not implemented.\")\n
"},{"location":"api/#network_wrangler.utils.io.unzip_file","title":"unzip_file(path)","text":"

Unzips a file to a temporary directory and returns the directory path.

Source code in network_wrangler/utils/io.py
def unzip_file(path: Path) -> Path:\n    \"\"\"Unzips a file to a temporary directory and returns the directory path.\"\"\"\n    tmpdir = tempfile.mkdtemp()\n    shutil.unpack_archive(path, tmpdir)\n\n    def finalize() -> None:\n        shutil.rmtree(tmpdir)\n\n    # Lazy cleanup\n    weakref.finalize(tmpdir, finalize)\n\n    return tmpdir\n
"},{"location":"api/#network_wrangler.utils.io.write_table","title":"write_table(df, filename, overwrite=False, **kwargs)","text":"

Write a dataframe or geodataframe to a file.

Parameters:

Name Type Description Default df DataFrame

dataframe to write.

required filename Path

filename to write to.

required overwrite bool

whether to overwrite the file if it exists. Defaults to False.

False kwargs

additional arguments to pass to the writer.

{} Source code in network_wrangler/utils/io.py
def write_table(\n    df: Union[pd.DataFrame, gpd.GeoDataFrame],\n    filename: Path,\n    overwrite: bool = False,\n    **kwargs,\n) -> None:\n    \"\"\"Write a dataframe or geodataframe to a file.\n\n    Args:\n        df (pd.DataFrame): dataframe to write.\n        filename (Path): filename to write to.\n        overwrite (bool): whether to overwrite the file if it exists. Defaults to False.\n        kwargs: additional arguments to pass to the writer.\n\n    \"\"\"\n    filename = Path(filename)\n    if filename.exists() and not overwrite:\n        raise FileExistsError(f\"File {filename} already exists and overwrite is False.\")\n\n    if not filename.parent.exists():\n        filename.parent.mkdir(parents=True)\n\n    WranglerLogger.debug(f\"Writing to {filename}.\")\n\n    if \"shp\" in filename.suffix:\n        df.to_file(filename, index=False, **kwargs)\n    elif \"parquet\" in filename.suffix:\n        df.to_parquet(filename, index=False, **kwargs)\n    elif \"csv\" in filename.suffix:\n        df.to_csv(filename, index=False, date_format=\"%H:%M:%S\", **kwargs)\n    elif \"txt\" in filename.suffix:\n        df.to_csv(filename, index=False, date_format=\"%H:%M:%S\", **kwargs)\n    elif \"geojson\" in filename.suffix:\n        # required due to issues with list-like columns\n        if isinstance(df, gpd.GeoDataFrame):\n            data = df.to_json(drop_id=True)\n        else:\n            data = df.to_json(orient=\"records\", index=False)\n        with open(filename, \"w\", encoding=\"utf-8\") as file:\n            file.write(data)\n    elif \"json\" in filename.suffix:\n        with open(filename, \"w\") as f:\n            f.write(df.to_json(orient=\"records\"))\n    else:\n        raise NotImplementedError(f\"Filetype {filename.suffix} not implemented.\")\n
"},{"location":"api/#network_wrangler.utils.models.DatamodelDataframeIncompatableError","title":"DatamodelDataframeIncompatableError","text":"

Bases: Exception

Raised when a data model and a dataframe are not compatible.

Source code in network_wrangler/utils/models.py
class DatamodelDataframeIncompatableError(Exception):\n    \"\"\"Raised when a data model and a dataframe are not compatible.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.models.coerce_extra_fields_to_type_in_df","title":"coerce_extra_fields_to_type_in_df(data, model, df)","text":"

Coerce extra fields in data that aren\u2019t specified in Pydantic model to the type in the df.

Note: will not coerce lists of submodels, etc.

Parameters:

Name Type Description Default data dict

The data to coerce.

required model BaseModel

The Pydantic model to validate the data against.

required df DataFrame

The DataFrame to coerce the data to.

required Source code in network_wrangler/utils/models.py
def coerce_extra_fields_to_type_in_df(\n    data: BaseModel, model: BaseModel, df: pd.DataFrame\n) -> BaseModel:\n    \"\"\"Coerce extra fields in data that aren't specified in Pydantic model to the type in the df.\n\n    Note: will not coerce lists of submodels, etc.\n\n    Args:\n        data (dict): The data to coerce.\n        model (BaseModel): The Pydantic model to validate the data against.\n        df (pd.DataFrame): The DataFrame to coerce the data to.\n    \"\"\"\n    out_data = copy.deepcopy(data)\n\n    # Coerce submodels\n    for field in submodel_fields_in_model(model, data):\n        out_data.__dict__[field] = coerce_extra_fields_to_type_in_df(\n            data.__dict__[field], model.__annotations__[field], df\n        )\n\n    for field in extra_attributes_undefined_in_model(data, model):\n        try:\n            v = coerce_val_to_df_types(field, data.model_extra[field], df)\n        except ValueError as e:\n            raise DatamodelDataframeIncompatableError(e)\n        out_data.model_extra[field] = v\n    return out_data\n
"},{"location":"api/#network_wrangler.utils.models.default_from_datamodel","title":"default_from_datamodel(data_model, field)","text":"

Returns default value from pandera data model for a given field name.

Source code in network_wrangler/utils/models.py
def default_from_datamodel(data_model: pa.DataFrameModel, field: str):\n    \"\"\"Returns default value from pandera data model for a given field name.\"\"\"\n    if field in data_model.__fields__:\n        return data_model.__fields__[field][1].default\n    return None\n
"},{"location":"api/#network_wrangler.utils.models.empty_df_from_datamodel","title":"empty_df_from_datamodel(model, crs=LAT_LON_CRS)","text":"

Create an empty DataFrame or GeoDataFrame with the specified columns.

Parameters:

Name Type Description Default model BaseModel

A pandera data model to create empty [Geo]DataFrame from.

required crs int

if schema has geometry, will use this as the geometry\u2019s crs. Defaults to LAT_LON_CRS.

LAT_LON_CRS Source code in network_wrangler/utils/models.py
def empty_df_from_datamodel(\n    model: DataFrameModel, crs: int = LAT_LON_CRS\n) -> Union[gpd.GeoDataFrame, pd.DataFrame]:\n    \"\"\"Create an empty DataFrame or GeoDataFrame with the specified columns.\n\n    Args:\n        model (BaseModel): A pandera data model to create empty [Geo]DataFrame from.\n        crs: if schema has geometry, will use this as the geometry's crs. Defaults to LAT_LON_CRS.\n    Returns:\n        An empty [Geo]DataFrame that validates to the specified model.\n    \"\"\"\n    schema = model.to_schema()\n    data = {col: [] for col in schema.columns.keys()}\n\n    if \"geometry\" in data:\n        return model(gpd.GeoDataFrame(data, crs=crs))\n\n    return model(pd.DataFrame(data))\n
"},{"location":"api/#network_wrangler.utils.models.extra_attributes_undefined_in_model","title":"extra_attributes_undefined_in_model(instance, model)","text":"

Find the extra attributes in a pydantic model that are not defined in the model.

Source code in network_wrangler/utils/models.py
def extra_attributes_undefined_in_model(instance: BaseModel, model: BaseModel) -> list:\n    \"\"\"Find the extra attributes in a pydantic model that are not defined in the model.\"\"\"\n    defined_fields = model.model_fields\n    all_attributes = list(instance.model_dump(exclude_none=True, by_alias=True).keys())\n    extra_attributes = [a for a in all_attributes if a not in defined_fields]\n    return extra_attributes\n
"},{"location":"api/#network_wrangler.utils.models.identify_model","title":"identify_model(data, models)","text":"

Identify the model that the input data conforms to.

Parameters:

Name Type Description Default data Union[DataFrame, dict]

The input data to identify.

required models list[DataFrameModel, BaseModel]

A list of models to validate the input data against.

required Source code in network_wrangler/utils/models.py
def identify_model(\n    data: Union[pd.DataFrame, dict], models: list[DataFrameModel, BaseModel]\n) -> Union[DataFrameModel, BaseModel]:\n    \"\"\"Identify the model that the input data conforms to.\n\n    Args:\n        data (Union[pd.DataFrame, dict]): The input data to identify.\n        models (list[DataFrameModel,BaseModel]): A list of models to validate the input\n          data against.\n    \"\"\"\n    for m in models:\n        try:\n            if isinstance(data, pd.DataFrame):\n                m.validate(data)\n            else:\n                m(**data)\n            return m\n        except ValidationError:\n            continue\n        except SchemaError:\n            continue\n\n    WranglerLogger.error(\n        f\"The input data isn't consistent with any provided data model.\\\n                         \\nInput data: {data}\\\n                         \\nData Models: {models}\"\n    )\n    raise ValueError(\"The input dictionary does not conform to any of the provided models.\")\n
"},{"location":"api/#network_wrangler.utils.models.submodel_fields_in_model","title":"submodel_fields_in_model(model, instance=None)","text":"

Find the fields in a pydantic model that are submodels.

Source code in network_wrangler/utils/models.py
def submodel_fields_in_model(model: BaseModel, instance: Optional[BaseModel] = None) -> list:\n    \"\"\"Find the fields in a pydantic model that are submodels.\"\"\"\n    types = get_type_hints(model)\n    model_type = Union[ModelMetaclass, BaseModel]\n    submodels = [f for f in model.model_fields if isinstance(types.get(f), model_type)]\n    if instance is not None:\n        defined = list(instance.model_dump(exclude_none=True, by_alias=True).keys())\n        return [f for f in submodels if f in defined]\n    return submodels\n
"},{"location":"api/#network_wrangler.utils.net.point_seq_to_links","title":"point_seq_to_links(point_seq_df, id_field, seq_field, node_id_field, from_field='A', to_field='B')","text":"

Translates a df with tidy data representing a sequence of points into links.

Parameters:

Name Type Description Default point_seq_df DataFrame

Dataframe with source breadcrumbs

required id_field str

Trace ID

required seq_field str

Order of breadcrumbs within ID_field

required node_id_field str

field denoting the node ID

required from_field str

Field to export from_field to. Defaults to \u201cA\u201d.

'A' to_field str

Field to export to_field to. Defaults to \u201cB\u201d.

'B'

Returns:

Type Description DataFrame

pd.DataFrame: Link records with id_field, from_field, to_field

Source code in network_wrangler/utils/net.py
def point_seq_to_links(\n    point_seq_df: DataFrame,\n    id_field: str,\n    seq_field: str,\n    node_id_field: str,\n    from_field: str = \"A\",\n    to_field: str = \"B\",\n) -> DataFrame:\n    \"\"\"Translates a df with tidy data representing a sequence of points into links.\n\n    Args:\n        point_seq_df (pd.DataFrame): Dataframe with source breadcrumbs\n        id_field (str): Trace ID\n        seq_field (str): Order of breadcrumbs within ID_field\n        node_id_field (str): field denoting the node ID\n        from_field (str, optional): Field to export from_field to. Defaults to \"A\".\n        to_field (str, optional): Field to export to_field to. Defaults to \"B\".\n\n    Returns:\n        pd.DataFrame: Link records with id_field, from_field, to_field\n    \"\"\"\n    point_seq_df = point_seq_df.sort_values(by=[id_field, seq_field])\n\n    links = point_seq_df.add_suffix(f\"_{from_field}\").join(\n        point_seq_df.shift(-1).add_suffix(f\"_{to_field}\")\n    )\n\n    links = links[links[f\"{id_field}_{to_field}\"] == links[f\"{id_field}_{from_field}\"]]\n\n    links = links.drop(columns=[f\"{id_field}_{to_field}\"])\n    links = links.rename(\n        columns={\n            f\"{id_field}_{from_field}\": id_field,\n            f\"{node_id_field}_{from_field}\": from_field,\n            f\"{node_id_field}_{to_field}\": to_field,\n        }\n    )\n\n    links = links.dropna(subset=[from_field, to_field])\n    # Since join with a shift() has some NAs, we need to recast the columns to int\n    _int_cols = [to_field, f\"{seq_field}_{to_field}\"]\n    links[_int_cols] = links[_int_cols].astype(int)\n    return links\n
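The shift-and-join pattern above can be sketched stand-alone with plain pandas (`points_to_links` and the column names here are illustrative, not part of network_wrangler):

```python
import pandas as pd

# Breadcrumbs: one row per point, ordered by seq within each trace id.
points = pd.DataFrame({"trace": [1, 1, 1], "seq": [0, 1, 2], "node": [10, 11, 12]})

def points_to_links(df: pd.DataFrame) -> pd.DataFrame:
    # Pair each point with the next one via shift(-1); rows where the
    # trace id changes (or runs out) are dropped.
    df = df.sort_values(["trace", "seq"])
    nxt = df.shift(-1)
    links = pd.DataFrame({"trace": df["trace"], "A": df["node"], "B": nxt["node"]})
    links = links[df["trace"] == nxt["trace"]]
    # The join with shift() leaves NAs, so recast the "to" column to int.
    return links.astype({"B": int})

links = points_to_links(points)  # links 10->11 and 11->12
```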
"},{"location":"api/#network_wrangler.utils.time.convert_timespan_to_start_end_dt","title":"convert_timespan_to_start_end_dt(timespan_s)","text":"

Convert a timespan string [\u201812:00\u2019, \u201814:00\u2019] to start_time and end_time datetime cols in df.

Source code in network_wrangler/utils/time.py
def convert_timespan_to_start_end_dt(timespan_s: pd.Series) -> pd.DataFrame:\n    \"\"\"Convert a timespan string ['12:00', '14:00'] to start_time and end_time datetime cols in df.\"\"\"\n    start_time = timespan_s.apply(lambda x: str_to_time(x[0]))\n    end_time = timespan_s.apply(lambda x: str_to_time(x[1]))\n    return pd.DataFrame({\"start_time\": start_time, \"end_time\": end_time})\n
"},{"location":"api/#network_wrangler.utils.time.dt_contains","title":"dt_contains(timespan1, timespan2)","text":"

Check if one timespan inclusively contains another.

Parameters:

Name Type Description Default timespan1 list[time]

The first timespan represented as a list containing the start time and end time.

required timespan2 list[time]

The second timespan represented as a list containing the start time and end time.

required

Returns:

Name Type Description bool bool

True if the first timespan contains the second timespan, False otherwise.

Source code in network_wrangler/utils/time.py
@validate_call\ndef dt_contains(timespan1: list[datetime], timespan2: list[datetime]) -> bool:\n    \"\"\"Check if one timespan inclusively contains another.\n\n    Args:\n        timespan1 (list[time]): The first timespan represented as a list containing the start\n            time and end time.\n        timespan2 (list[time]): The second timespan represented as a list containing the start\n            time and end time.\n\n    Returns:\n        bool: True if the first timespan contains the second timespan, False otherwise.\n    \"\"\"\n    start_time_dt, end_time_dt = timespan1\n    start_time_dt2, end_time_dt2 = timespan2\n    return (start_time_dt <= start_time_dt2) and (end_time_dt >= end_time_dt2)\n
"},{"location":"api/#network_wrangler.utils.time.dt_list_overlaps","title":"dt_list_overlaps(timespans)","text":"

Check if any of the timespans overlap.

overlapping: a timespan that fully or partially overlaps a given timespan. This includes all timespans where at least one minute overlaps.

Source code in network_wrangler/utils/time.py
@validate_call\ndef dt_list_overlaps(timespans: list[list[datetime]]) -> bool:\n    \"\"\"Check if any of the timespans overlap.\n\n    `overlapping`: a timespan that fully or partially overlaps a given timespan.\n    This includes all timespans where at least one minute overlaps.\n    \"\"\"\n    if filter_dt_list_to_overlaps(timespans):\n        return True\n    return False\n
"},{"location":"api/#network_wrangler.utils.time.dt_overlap_duration","title":"dt_overlap_duration(timedelta1, timedelta2)","text":"

Check if two timespans overlap and return the amount of overlap.

Source code in network_wrangler/utils/time.py
@validate_call\ndef dt_overlap_duration(timedelta1: timedelta, timedelta2: timedelta) -> timedelta:\n    \"\"\"Check if two timespans overlap and return the amount of overlap.\"\"\"\n    overlap_start = max(timedelta1.start_time, timedelta2.start_time)\n    overlap_end = min(timedelta1.end_time, timedelta2.end_time)\n    overlap_duration = max(overlap_end - overlap_start, timedelta(0))\n    return overlap_duration\n
"},{"location":"api/#network_wrangler.utils.time.dt_overlaps","title":"dt_overlaps(timespan1, timespan2)","text":"

Check if two timespans overlap.

overlapping: a timespan that fully or partially overlaps a given timespan. This includes all timespans where at least one minute overlaps.

Source code in network_wrangler/utils/time.py
@validate_call\ndef dt_overlaps(timespan1: list[datetime], timespan2: list[datetime]) -> bool:\n    \"\"\"Check if two timespans overlap.\n\n    `overlapping`: a timespan that fully or partially overlaps a given timespan.\n    This includes all timespans where at least one minute overlaps.\n    \"\"\"\n    if (timespan1[0] < timespan2[1]) and (timespan2[0] < timespan1[1]):\n        return True\n    return False\n
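The test itself is the standard interval check: two spans overlap when each one starts before the other ends. A minimal stdlib sketch (`spans_overlap` is an illustrative name):

```python
from datetime import datetime

def spans_overlap(timespan1, timespan2):
    # Two [start, end] spans overlap when each starts before the other ends.
    return timespan1[0] < timespan2[1] and timespan2[0] < timespan1[1]

am = [datetime(2024, 1, 1, 6), datetime(2024, 1, 1, 10)]
peak = [datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 12)]
pm = [datetime(2024, 1, 1, 15), datetime(2024, 1, 1, 18)]
spans_overlap(am, peak)  # True
spans_overlap(am, pm)    # False
```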
"},{"location":"api/#network_wrangler.utils.time.duration_dt","title":"duration_dt(start_time_dt, end_time_dt)","text":"

Returns a datetime.timedelta object representing the duration of the timespan.

If end_time is less than start_time, the timespan is assumed to cross over midnight.

Source code in network_wrangler/utils/time.py
def duration_dt(start_time_dt: datetime, end_time_dt: datetime) -> timedelta:\n    \"\"\"Returns a datetime.timedelta object representing the duration of the timespan.\n\n    If end_time is less than start_time, the duration will assume that it crosses over\n    midnight.\n    \"\"\"\n    if end_time_dt < start_time_dt:\n        return timedelta(\n            hours=24 - start_time_dt.hour + end_time_dt.hour,\n            minutes=end_time_dt.minute - start_time_dt.minute,\n            seconds=end_time_dt.second - start_time_dt.second,\n        )\n    else:\n        return end_time_dt - start_time_dt\n
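The midnight-crossing rule can be checked with a small stdlib sketch; `duration_wrapping` is an illustrative helper using an equivalent formulation (add a day when the span wraps), not the library's own code:

```python
from datetime import datetime, timedelta

def duration_wrapping(start, end):
    # An end time earlier than the start time is assumed to fall on the next day.
    if end < start:
        return end + timedelta(days=1) - start
    return end - start

late = datetime(2024, 1, 1, 23, 30)
early = datetime(2024, 1, 1, 1, 0)
duration_wrapping(late, early)  # 1:30:00 (crosses midnight)
```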
"},{"location":"api/#network_wrangler.utils.time.filter_df_to_overlapping_timespans","title":"filter_df_to_overlapping_timespans(orig_df, query_timespan, strict_match=False, min_overlap_minutes=0, keep_max_of_cols=['model_link_id'])","text":"

Filters dataframe for entries that have maximum overlap with the given query timespan.

Parameters:

Name Type Description Default orig_df DataFrame

dataframe to query timespans for with start_time and end_time.

required query_timespan list[TimeString]

TimespanString of format [\u2018HH:MM\u2019,\u2019HH:MM\u2019] to query orig_df for overlapping records.

required strict_match bool

boolean indicating if the returned df should only contain records that fully contain the query timespan. If set to True, min_overlap_minutes does not apply. Defaults to False.

False min_overlap_minutes int

minimum number of minutes the timespans need to overlap to keep. Defaults to 0.

0 keep_max_of_cols list[str]

list of fields to return the maximum value of overlap for. If None, will return all overlapping time periods. Defaults to ['model_link_id']

['model_link_id'] Source code in network_wrangler/utils/time.py
def filter_df_to_overlapping_timespans(\n    orig_df: pd.DataFrame,\n    query_timespan: list[TimeString],\n    strict_match: bool = False,\n    min_overlap_minutes: int = 0,\n    keep_max_of_cols: list[str] = [\"model_link_id\"],\n) -> pd.DataFrame:\n    \"\"\"Filters dataframe for entries that have maximum overlap with the given query timespan.\n\n    Args:\n        orig_df: dataframe to query timespans for with `start_time` and `end_time`.\n        query_timespan: TimespanString of format ['HH:MM','HH:MM'] to query orig_df for overlapping\n            records.\n        strict_match: boolean indicating if the returned df should only contain\n            records that fully contain the query timespan. If set to True, min_overlap_minutes\n            does not apply. Defaults to False.\n        min_overlap_minutes: minimum number of minutes the timespans need to overlap to keep.\n            Defaults to 0.\n        keep_max_of_cols: list of fields to return the maximum value of overlap for.  If None,\n            will return all overlapping time periods. Defaults to `['model_link_id']`\n    \"\"\"\n    q_start, q_end = str_to_time_list(query_timespan)\n\n    overlap_start = orig_df[\"start_time\"].combine(q_start, max)\n    overlap_end = orig_df[\"end_time\"].combine(q_end, min)\n    orig_df[\"overlap_duration\"] = (overlap_end - overlap_start).dt.total_seconds() / 60\n\n    if strict_match:\n        overlap_df = orig_df.loc[(orig_df.start_time <= q_start) & (orig_df.end_time >= q_end)]\n    else:\n        overlap_df = orig_df.loc[orig_df.overlap_duration > min_overlap_minutes]\n    WranglerLogger.debug(f\"overlap_df: \\n{overlap_df}\")\n    if keep_max_of_cols:\n        # keep only the maximum overlap\n        idx = overlap_df.groupby(keep_max_of_cols)[\"overlap_duration\"].idxmax()\n        overlap_df = overlap_df.loc[idx]\n    return overlap_df\n
"},{"location":"api/#network_wrangler.utils.time.filter_dt_list_to_overlaps","title":"filter_dt_list_to_overlaps(timespans)","text":"

Filter a list of timespans to only include those that overlap.

overlapping: a timespan that fully or partially overlaps a given timespan. This includes all timespans where at least one minute overlaps.

Source code in network_wrangler/utils/time.py
@validate_call\ndef filter_dt_list_to_overlaps(timespans: list[list[datetime]]) -> list[list[datetime]]:\n    \"\"\"Filter a list of timespans to only include those that overlap.\n\n    `overlapping`: a timespan that fully or partially overlaps a given timespan.\n    This includes all timespans where at least one minute overlaps.\n    \"\"\"\n    overlaps = []\n    for i in range(len(timespans)):\n        for j in range(i + 1, len(timespans)):\n            if dt_overlaps(timespans[i], timespans[j]):\n                overlaps += [timespans[i], timespans[j]]\n\n    # remove dupes\n    overlaps = list(map(list, set(map(tuple, overlaps))))\n    return overlaps\n
"},{"location":"api/#network_wrangler.utils.time.format_time","title":"format_time(seconds)","text":"

Formats seconds into a human-friendly string for log files.

Source code in network_wrangler/utils/time.py
def format_time(seconds):\n    \"\"\"Formats seconds into a human-friendly string for log files.\"\"\"\n    if seconds < 60:\n        return f\"{int(seconds)} seconds\"\n    elif seconds < 3600:\n        return f\"{int(seconds // 60)} minutes\"\n    else:\n        hours = int(seconds // 3600)\n        minutes = int((seconds % 3600) // 60)\n        return f\"{hours} hours and {minutes} minutes\"\n
"},{"location":"api/#network_wrangler.utils.time.str_to_time","title":"str_to_time(time_str)","text":"

Convert TimeString (HH:MM<:SS>) to datetime.time object.

Source code in network_wrangler/utils/time.py
def str_to_time(time_str: TimeString) -> datetime:\n    \"\"\"Convert TimeString (HH:MM<:SS>) to datetime.time object.\"\"\"\n    n_days = 0\n    # Convert to the next day\n    hours, min_sec = time_str.split(\":\", 1)\n    if int(hours) >= 24:\n        n_days, hour_of_day = divmod(int(hours), 24)\n        time_str = f\"{hour_of_day}:{min_sec}\"  # noqa E231\n\n    if len(time_str.split(\":\")) == 2:\n        base_time = datetime.strptime(time_str, \"%H:%M\")\n    elif len(time_str.split(\":\")) == 3:\n        base_time = datetime.strptime(time_str, \"%H:%M:%S\")\n    else:\n        from ..time import TimeFormatError\n\n        raise TimeFormatError(\"time strings must be in the format HH:MM or HH:MM:SS\")\n\n    total_time = base_time\n    if n_days > 0:\n        total_time = base_time + timedelta(days=n_days)\n    return total_time\n
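The over-24-hour handling (common in GTFS-style schedules, where "25:30" means 1:30 AM the next day) can be reproduced stand-alone; `parse_time_string` is an illustrative name, not the library API:

```python
from datetime import datetime, timedelta

def parse_time_string(time_str: str) -> datetime:
    # Hours of 24 or more roll over into whole extra days: "25:30" -> 01:30 + 1 day.
    hours, _, min_sec = time_str.partition(":")
    n_days, hour_of_day = divmod(int(hours), 24)
    normalized = f"{hour_of_day}:{min_sec}"
    fmt = "%H:%M:%S" if normalized.count(":") == 2 else "%H:%M"
    return datetime.strptime(normalized, fmt) + timedelta(days=n_days)

parse_time_string("25:30")  # 01:30 on the following day
```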
"},{"location":"api/#network_wrangler.utils.time.str_to_time_list","title":"str_to_time_list(timespan)","text":"

Convert list of TimeStrings (HH:MM<:SS>) to list of datetime.time objects.

Source code in network_wrangler/utils/time.py
def str_to_time_list(timespan: list[TimeString]) -> list[list[datetime]]:\n    \"\"\"Convert list of TimeStrings (HH:MM<:SS>) to list of datetime.time objects.\"\"\"\n    return list(map(str_to_time, timespan))\n
"},{"location":"api/#network_wrangler.utils.time.timespan_str_list_to_dt","title":"timespan_str_list_to_dt(timespans)","text":"

Convert list of TimespanStrings to list of datetime.time objects.

Source code in network_wrangler/utils/time.py
def timespan_str_list_to_dt(timespans: list[TimespanString]) -> list[list[datetime]]:\n    \"\"\"Convert list of TimespanStrings to list of datetime.time objects.\"\"\"\n    return [str_to_time_list(ts) for ts in timespans]\n
"},{"location":"api/#network_wrangler.utils.data.InvalidJoinFieldError","title":"InvalidJoinFieldError","text":"

Bases: Exception

Raised when the join field is not unique.

Source code in network_wrangler/utils/data.py
class InvalidJoinFieldError(Exception):\n    \"\"\"Raised when the join field is not unique.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.data.MissingPropertiesError","title":"MissingPropertiesError","text":"

Bases: Exception

Raised when properties are missing from the dataframe.

Source code in network_wrangler/utils/data.py
class MissingPropertiesError(Exception):\n    \"\"\"Raised when properties are missing from the dataframe.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.data.attach_parameters_to_df","title":"attach_parameters_to_df(df, params)","text":"

Attach params as a dataframe attribute which will be copied with the dataframe.

Source code in network_wrangler/utils/data.py
def attach_parameters_to_df(df: pd.DataFrame, params) -> pd.DataFrame:\n    \"\"\"Attach params as a dataframe attribute which will be copied with the dataframe.\"\"\"\n    if not df.__dict__.get(\"params\"):\n        df.__dict__[\"params\"] = params\n        # need to add params to _metadata in order to make sure it is copied.\n        # see: https://stackoverflow.com/questions/50372509/\n        df._metadata += [\"params\"]\n    # WranglerLogger.debug(f\"DFParams: {df.params}\")\n    return df\n
"},{"location":"api/#network_wrangler.utils.data.coerce_dict_to_df_types","title":"coerce_dict_to_df_types(d, df, skip_keys=[], return_skipped=False)","text":"

Coerce dictionary values to match the type of a dataframe columns matching dict keys.

Will also coerce a list of values.

Parameters:

Name Type Description Default d dict

dictionary to coerce with singleton or list values

required df DataFrame

dataframe to get types from

required skip_keys list

list of dict keys to skip. Defaults to [].

[] return_skipped bool

keep the uncoerced, skipped keys/vals in the resulting dict. Defaults to False.

False

Returns:

Name Type Description dict dict

dict with coerced types

Source code in network_wrangler/utils/data.py
def coerce_dict_to_df_types(\n    d: dict, df: pd.DataFrame, skip_keys: list = [], return_skipped: bool = False\n) -> dict:\n    \"\"\"Coerce dictionary values to match the type of a dataframe columns matching dict keys.\n\n    Will also coerce a list of values.\n\n    Args:\n        d (dict): dictionary to coerce with singleton or list values\n        df (pd.DataFrame): dataframe to get types from\n        skip_keys: list of dict keys to skip. Defaults to []/\n        return_skipped: keep the uncoerced, skipped keys/vals in the resulting dict.\n            Defaults to False.\n\n    Returns:\n        dict: dict with coerced types\n    \"\"\"\n    coerced_dict = {}\n    for k, vals in d.items():\n        if k in skip_keys:\n            if return_skipped:\n                coerced_dict[k] = vals\n            continue\n        if k not in df.columns:\n            raise ValueError(f\"Key {k} not in dataframe columns.\")\n        if pd.api.types.infer_dtype(df[k]) == \"integer\":\n            if isinstance(vals, list):\n                coerced_v = [int(float(v)) for v in vals]\n            else:\n                coerced_v = int(float(vals))\n        elif pd.api.types.infer_dtype(df[k]) == \"floating\":\n            if isinstance(vals, list):\n                coerced_v = [float(v) for v in vals]\n            else:\n                coerced_v = float(vals)\n        elif pd.api.types.infer_dtype(df[k]) == \"boolean\":\n            if isinstance(vals, list):\n                coerced_v = [bool(v) for v in vals]\n            else:\n                coerced_v = bool(vals)\n        else:\n            if isinstance(vals, list):\n                coerced_v = [str(v) for v in vals]\n            else:\n                coerced_v = str(vals)\n        coerced_dict[k] = coerced_v\n    return coerced_dict\n
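The dtype-driven coercion can be sketched with plain pandas; `coerce_to_column_types` is an illustrative stand-in that mirrors the integer/float/boolean/string branches above:

```python
import pandas as pd

def coerce_to_column_types(d: dict, df: pd.DataFrame) -> dict:
    def convert(v, kind):
        if kind == "integer":
            # int(float(...)) so "3.0" also coerces cleanly to 3.
            return int(float(v))
        if kind == "floating":
            return float(v)
        if kind == "boolean":
            return bool(v)
        return str(v)

    out = {}
    for k, vals in d.items():
        # Infer the target type from the matching dataframe column.
        kind = pd.api.types.infer_dtype(df[k])
        out[k] = [convert(v, kind) for v in vals] if isinstance(vals, list) else convert(vals, kind)
    return out

df = pd.DataFrame({"lanes": [1, 2], "name": ["a", "b"]})
coerce_to_column_types({"lanes": "3.0", "name": 5}, df)  # {"lanes": 3, "name": "5"}
```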
"},{"location":"api/#network_wrangler.utils.data.coerce_gdf","title":"coerce_gdf(df, geometry=None, in_crs=LAT_LON_CRS)","text":"

Coerce a DataFrame to a GeoDataFrame, optionally with a new geometry.

Source code in network_wrangler/utils/data.py
def coerce_gdf(\n    df: pd.DataFrame, geometry: GeoSeries = None, in_crs: int = LAT_LON_CRS\n) -> GeoDataFrame:\n    \"\"\"Coerce a DataFrame to a GeoDataFrame, optionally with a new geometry.\"\"\"\n    if isinstance(df, GeoDataFrame):\n        if df.crs is None:\n            df.crs = in_crs\n        return df\n    p = None\n    if \"params\" in df.__dict__:\n        p = copy.deepcopy(df.params)\n\n    if \"geometry\" not in df and geometry is None:\n        raise ValueError(\"Must give geometry argument if don't have Geometry in dataframe\")\n\n    geometry = geometry if geometry is not None else df[\"geometry\"]\n    if not isinstance(geometry, GeoSeries):\n        try:\n            geometry = GeoSeries(geometry)\n        except:  # noqa: E722\n            geometry = geometry.apply(wkt.loads)\n    df = GeoDataFrame(df, geometry=geometry, crs=in_crs)\n\n    if p is not None:\n        # GeoPandas seems to lose parameters if we don't re-attach them.\n        df.__dict__[\"params\"] = p\n    return df\n
"},{"location":"api/#network_wrangler.utils.data.coerce_val_to_df_types","title":"coerce_val_to_df_types(field, val, df)","text":"

Coerce field value to match the type of a matching dataframe columns.

Parameters:

Name Type Description Default field str

field to lookup

required val Union[str, int, float, bool, list[Union[str, int, float, bool]]]

value or list of values to coerce

required df DataFrame

dataframe to get types from

required Source code in network_wrangler/utils/data.py
def coerce_val_to_df_types(\n    field: str,\n    val: Union[str, int, float, bool, list[Union[str, int, float, bool]]],\n    df: pd.DataFrame,\n) -> dict:\n    \"\"\"Coerce field value to match the type of a matching dataframe columns.\n\n    Args:\n        field: field to lookup\n        val: value or list of values to coerce\n        df (pd.DataFrame): dataframe to get types from\n\n    Returns: coerced value or list of values\n    \"\"\"\n    if field not in df.columns:\n        raise ValueError(f\"Field {field} not in dataframe columns.\")\n    if pd.api.types.infer_dtype(df[field]) == \"integer\":\n        if isinstance(val, list):\n            return [int(float(v)) for v in val]\n        return int(float(val))\n    elif pd.api.types.infer_dtype(df[field]) == \"floating\":\n        if isinstance(val, list):\n            return [float(v) for v in val]\n        return float(val)\n    elif pd.api.types.infer_dtype(df[field]) == \"boolean\":\n        if isinstance(val, list):\n            return [bool(v) for v in val]\n        return bool(val)\n    else:\n        if isinstance(val, list):\n            return [str(v) for v in val]\n        return str(val)\n
"},{"location":"api/#network_wrangler.utils.data.coerce_val_to_series_type","title":"coerce_val_to_series_type(val, s)","text":"

Coerces a value to match type of pandas series.

Will try not to fail: if you give it a value that can\u2019t be converted to a number, it will return a string.

Parameters:

Name Type Description Default val

Any type of singleton value

required s Series

series to match the type to

required Source code in network_wrangler/utils/data.py
def coerce_val_to_series_type(val, s: pd.Series):\n    \"\"\"Coerces a value to match type of pandas series.\n\n    Will try not to fail so if you give it a value that can't convert to a number, it will\n    return a string.\n\n    Args:\n        val: Any type of singleton value\n        s (pd.Series): series to match the type to\n    \"\"\"\n    # WranglerLogger.debug(f\"Input val: {val} of type {type(val)} to match with series type \\\n    #    {pd.api.types.infer_dtype(s)}.\")\n    if pd.api.types.infer_dtype(s) in [\"integer\", \"floating\"]:\n        try:\n            v = float(val)\n        except:  # noqa: E722\n            v = str(val)\n    elif pd.api.types.infer_dtype(s) == \"boolean\":\n        v = bool(val)\n    else:\n        v = str(val)\n    # WranglerLogger.debug(f\"Return value: {v}\")\n    return v\n
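The fall-back-to-string behavior can be demonstrated with a stand-alone sketch (`coerce_like_series` is an illustrative name):

```python
import pandas as pd

def coerce_like_series(val, s: pd.Series):
    kind = pd.api.types.infer_dtype(s)
    if kind in ("integer", "floating"):
        try:
            return float(val)
        except (TypeError, ValueError):
            # Not convertible to a number: degrade to a string instead of raising.
            return str(val)
    if kind == "boolean":
        return bool(val)
    return str(val)

coerce_like_series("12", pd.Series([1, 2]))       # 12.0
coerce_like_series("n/a", pd.Series([1.5, 2.5]))  # "n/a"
```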
"},{"location":"api/#network_wrangler.utils.data.compare_df_values","title":"compare_df_values(df1, df2, join_col=None, ignore=[], atol=1e-05)","text":"

Compare the overlapping part of two dataframes and return where there are differences.

Source code in network_wrangler/utils/data.py
def compare_df_values(df1, df2, join_col: str = None, ignore: list[str] = [], atol=1e-5):\n    \"\"\"Compare overlapping part of dataframes and returns where there are differences.\"\"\"\n    comp_c = [\n        c\n        for c in df1.columns\n        if c in df2.columns and c not in ignore and not isinstance(df1[c], GeoSeries)\n    ]\n    if join_col is None:\n        comp_df = df1[comp_c].merge(\n            df2[comp_c],\n            how=\"inner\",\n            right_index=True,\n            left_index=True,\n            suffixes=[\"_a\", \"_b\"],\n        )\n    else:\n        comp_df = df1[comp_c].merge(df2[comp_c], how=\"inner\", on=join_col, suffixes=[\"_a\", \"_b\"])\n\n    # Filter columns by data type\n    numeric_cols = [col for col in comp_c if np.issubdtype(df1[col].dtype, np.number)]\n    ll_cols = list_like_columns(df1)\n    other_cols = [col for col in comp_c if col not in numeric_cols and col not in ll_cols]\n\n    # For numeric columns, use np.isclose\n    if numeric_cols:\n        numeric_a = comp_df[[f\"{col}_a\" for col in numeric_cols]]\n        numeric_b = comp_df[[f\"{col}_b\" for col in numeric_cols]]\n        is_close = np.isclose(numeric_a, numeric_b, atol=atol, equal_nan=True)\n        comp_df[numeric_cols] = ~is_close\n\n    if ll_cols:\n        for ll_c in ll_cols:\n            comp_df[ll_c] = diff_list_like_series(comp_df[ll_c + \"_a\"], comp_df[ll_c + \"_b\"])\n\n    # For non-numeric columns, use direct comparison\n    if other_cols:\n        for col in other_cols:\n            comp_df[col] = (comp_df[f\"{col}_a\"] != comp_df[f\"{col}_b\"]) & ~(\n                comp_df[f\"{col}_a\"].isna() & comp_df[f\"{col}_b\"].isna()\n            )\n\n    # Filter columns and rows where no differences\n    cols_w_diffs = [col for col in comp_c if comp_df[col].any()]\n    out_cols = [col for subcol in cols_w_diffs for col in (f\"{subcol}_a\", f\"{subcol}_b\", subcol)]\n    comp_df = comp_df[out_cols]\n    comp_df = 
comp_df.loc[comp_df[cols_w_diffs].any(axis=1)]\n\n    return comp_df\n
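The numeric comparison above hinges on `np.isclose` with `atol` and `equal_nan=True`, so near-equal floats and paired NaNs are not flagged as differences. A small standalone illustration (pandas and numpy assumed available):

```python
import numpy as np
import pandas as pd

df_a = pd.DataFrame({"speed": [30.0, 45.000001, np.nan]})
df_b = pd.DataFrame({"speed": [30.0, 45.0, np.nan]})

# Same comparison primitive the function relies on: values within
# tolerance (and NaN-vs-NaN pairs) do not count as differences.
is_close = np.isclose(df_a["speed"], df_b["speed"], atol=1e-5, equal_nan=True)
print((~is_close).any())  # False: no differences at this tolerance
```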
"},{"location":"api/#network_wrangler.utils.data.dict_fields_in_df","title":"dict_fields_in_df(d, df)","text":"

Check if all fields in dict are in dataframe.

Source code in network_wrangler/utils/data.py
def dict_fields_in_df(d: dict, df: pd.DataFrame) -> bool:\n    \"\"\"Check if all fields in dict are in dataframe.\"\"\"\n    missing_fields = [f for f in d.keys() if f not in df.columns]\n    if missing_fields:\n        WranglerLogger.error(f\"Fields in dictionary missing from dataframe: {missing_fields}.\")\n        raise ValueError(f\"Fields in dictionary missing from dataframe: {missing_fields}.\")\n    return True\n
"},{"location":"api/#network_wrangler.utils.data.dict_to_query","title":"dict_to_query(selection_dict)","text":"

Generates a query string from selection_dict.

Parameters:

Name Type Description Default selection_dict Mapping[str, Any]

selection dictionary

required

Returns:

Name Type Description _type_ str

Query value

Source code in network_wrangler/utils/data.py
def dict_to_query(\n    selection_dict: Mapping[str, Any],\n) -> str:\n    \"\"\"Generates the query of from selection_dict.\n\n    Args:\n        selection_dict: selection dictionary\n\n    Returns:\n        _type_: Query value\n    \"\"\"\n    WranglerLogger.debug(\"Building selection query\")\n\n    def _kv_to_query_part(k, v, _q_part=\"\"):\n        if isinstance(v, list):\n            _q_part += \"(\" + \" or \".join([_kv_to_query_part(k, i) for i in v]) + \")\"\n            return _q_part\n        if isinstance(v, str):\n            return k + '.str.contains(\"' + v + '\")'\n        else:\n            return k + \"==\" + str(v)\n\n    query = \"(\" + \" and \".join([_kv_to_query_part(k, v) for k, v in selection_dict.items()]) + \")\"\n    WranglerLogger.debug(f\"Selection query: \\n{query}\")\n    return query\n
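The generated string is meant to be fed to `pandas.DataFrame.query`. The sketch below restates the documented rules (strings become `.str.contains`, scalars become `==`, lists expand into or-groups); `to_query` is a local illustration, not the library import, and `engine="python"` is needed because `.str` accessors are not supported by the default numexpr engine:

```python
import pandas as pd

def to_query(selection: dict) -> str:
    # Strings -> substring match, lists -> or-group, scalars -> equality.
    def part(k, v):
        if isinstance(v, list):
            return "(" + " or ".join(part(k, i) for i in v) + ")"
        if isinstance(v, str):
            return f'{k}.str.contains("{v}")'
        return f"{k}=={v}"
    return "(" + " and ".join(part(k, v) for k, v in selection.items()) + ")"

df = pd.DataFrame({"name": ["Main St", "1st Ave"], "lanes": [2, 3]})
q = to_query({"name": "Main", "lanes": [2, 3]})
print(q)  # (name.str.contains("Main") and (lanes==2 or lanes==3))
print(df.query(q, engine="python"))  # matches only "Main St"
```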
"},{"location":"api/#network_wrangler.utils.data.diff_dfs","title":"diff_dfs(df1, df2, ignore=[])","text":"

Compare two dataframes and log differences.

Source code in network_wrangler/utils/data.py
def diff_dfs(df1, df2, ignore: list[str] = []) -> bool:\n    \"\"\"Compare two dataframes and log differences.\"\"\"\n    diff = False\n    if set(df1.columns) != set(df2.columns):\n        WranglerLogger.warning(\n            f\" Columns are different 1vs2 \\n    {set(df1.columns) ^ set(df2.columns)}\"\n        )\n        common_cols = [col for col in df1.columns if col in df2.columns]\n        df1 = df1[common_cols]\n        df2 = df2[common_cols]\n        diff = True\n\n    cols_to_compare = [col for col in df1.columns if col not in ignore]\n    df1 = df1[cols_to_compare]\n    df2 = df2[cols_to_compare]\n\n    if len(df1) != len(df2):\n        WranglerLogger.warning(\n            f\" Length is different /\" f\"DF1: {len(df1)} vs /\" f\"DF2: {len(df2)}\\n /\"\n        )\n        diff = True\n\n    diff_df = compare_df_values(df1, df2)\n\n    if not diff_df.empty:\n        WranglerLogger.error(f\"!!! Differences dfs: \\n{diff_df}\")\n        return True\n\n    if not diff:\n        WranglerLogger.info(\"...no differences in df found.\")\n    return diff\n
"},{"location":"api/#network_wrangler.utils.data.diff_list_like_series","title":"diff_list_like_series(s1, s2)","text":"

Compare two series that contain list-like items as strings.

Source code in network_wrangler/utils/data.py
def diff_list_like_series(s1, s2) -> bool:\n    \"\"\"Compare two series that contain list-like items as strings.\"\"\"\n    diff_df = pd.concat([s1, s2], axis=1, keys=[\"s1\", \"s2\"])\n    diff_df[\"diff\"] = diff_df.apply(lambda x: str(x[\"s1\"]) != str(x[\"s2\"]), axis=1)\n\n    if diff_df[\"diff\"].any():\n        WranglerLogger.info(\"List-Like differences:\")\n        WranglerLogger.info(diff_df)\n        return True\n    return False\n
"},{"location":"api/#network_wrangler.utils.data.fk_in_pk","title":"fk_in_pk(pk, fk, ignore_nan=True)","text":"

Check if all foreign keys are in the primary keys, optionally ignoring NaN.

Source code in network_wrangler/utils/data.py
def fk_in_pk(\n    pk: Union[pd.Series, list], fk: Union[pd.Series, list], ignore_nan: bool = True\n) -> Tuple[bool, list]:\n    \"\"\"Check if all foreign keys are in the primary keys, optionally ignoring NaN.\"\"\"\n    if isinstance(fk, list):\n        fk = pd.Series(fk)\n\n    if ignore_nan:\n        fk = fk.dropna()\n\n    missing_flag = ~fk.isin(pk)\n\n    if missing_flag.any():\n        WranglerLogger.warning(\n            f\"Following keys referenced in {fk.name} but missing in\\\n            primary key table: \\n{fk[missing_flag]} \"\n        )\n        return False, fk[missing_flag].tolist()\n\n    return True, []\n
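The referential-integrity check reduces to a `dropna` plus `isin` against the primary-key column. A minimal sketch with hypothetical node/link columns (pandas assumed available):

```python
import pandas as pd

pk = pd.Series([1, 2, 3, 4], name="model_node_id")
fk = pd.Series([2, 4, 5, None], name="A")

# Same check the utility performs: drop NaN (ignore_nan=True), then
# find foreign keys absent from the primary-key column.
fk_clean = fk.dropna()
missing = fk_clean[~fk_clean.isin(pk)]
print(missing.tolist())  # [5.0] -> a referential-integrity violation
```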
"},{"location":"api/#network_wrangler.utils.data.list_like_columns","title":"list_like_columns(df, item_type=None)","text":"

Find columns in a dataframe that contain list-like items that can\u2019t be json-serialized.

Parameters:

Name Type Description Default df

dataframe to check

required item_type type

if not None, will only return columns where all items are of this type by checking only the first item in the column. Defaults to None.

None Source code in network_wrangler/utils/data.py
def list_like_columns(df, item_type: type = None) -> list[str]:\n    \"\"\"Find columns in a dataframe that contain list-like items that can't be json-serialized.\n\n    Args:\n        df: dataframe to check\n        item_type: if not None, will only return columns where all items are of this type by\n            checking **only** the first item in the column.  Defaults to None.\n    \"\"\"\n    list_like_columns = []\n\n    for column in df.columns:\n        if df[column].apply(lambda x: isinstance(x, (list, ndarray))).any():\n            if item_type is not None:\n                if not isinstance(df[column].iloc[0], item_type):\n                    continue\n            list_like_columns.append(column)\n    return list_like_columns\n
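The detection itself is a per-column `isinstance` scan. A standalone sketch with made-up column names (pandas and numpy assumed available):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "model_node_id": [1, 2],
    "shape_pts": [[0.0, 1.0], [2.0, 3.0]],       # python lists
    "speeds": [np.array([30]), np.array([25])],  # ndarrays
})

# Same test the utility applies: any list or ndarray item marks the
# column as list-like (and thus not json-serializable as-is).
ll_cols = [
    c for c in df.columns
    if df[c].apply(lambda x: isinstance(x, (list, np.ndarray))).any()
]
print(ll_cols)  # ['shape_pts', 'speeds']
```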
"},{"location":"api/#network_wrangler.utils.data.segment_data_by_selection","title":"segment_data_by_selection(item_list, data, field=None, end_val=0)","text":"

Segment a dataframe or series into before, middle, and end segments based on item_list.

Selected segment = everything from the first to the last item in item_list, inclusive of both endpoints. Before segment = everything before the selected segment. After segment = everything after it.

Parameters:

Name Type Description Default item_list list

List of items to segment data by. If longer than two, will only use the first and last items.

required data Union[Series, DataFrame]

Data to segment into before, middle, and after.

required field str

If a dataframe, specifies which field to reference. Defaults to None.

None end_val int

Sentinel value indicating "until the end" or "from the beginning". Defaults to 0.

0

Raises:

Type Description ValueError

If item_list isn't found in data in the correct order.

Returns:

Name Type Description tuple tuple[Union[Series, list, DataFrame]]

data broken out into before, selected segment, and after.

Source code in network_wrangler/utils/data.py
def segment_data_by_selection(\n    item_list: list,\n    data: Union[list, pd.DataFrame, pd.Series],\n    field: str = None,\n    end_val=0,\n) -> tuple[Union[pd.Series, list, pd.DataFrame]]:\n    \"\"\"Segment a dataframe or series into before, middle, and end segments based on item_list.\n\n    selected segment = everything from the first to last item in item_list inclusive of the first\n        and last items.\n    Before segment = everything before\n    After segment = everything after\n\n\n    Args:\n        item_list (list): List of items to segment data by. If longer than two, will only\n            use the first and last items.\n        data (Union[pd.Series, pd.DataFrame]): Data to segment into before, middle, and after.\n        field (str, optional): If a dataframe, specifies which field to reference.\n            Defaults to None.\n        end_val (int, optional): Notation for util the end or from the begining. Defaults to 0.\n\n    Raises:\n        ValueError: If item list isn't found in data in correct order.\n\n    Returns:\n        tuple: data broken out by beofore, selected segment, and after.\n    \"\"\"\n    ref_data = data\n    if isinstance(data, pd.DataFrame):\n        ref_data = data[field].tolist()\n    elif isinstance(data, pd.Series):\n        ref_data = data.tolist()\n\n    # ------- Replace \"to the end\" indicators with first or last value --------\n    start_item, end_item = item_list[0], item_list[-1]\n    if start_item == end_val:\n        start_item = ref_data[0]\n    if end_item == end_val:\n        end_item = ref_data[-1]\n\n    # --------Find the start and end indices -----------------------------------\n    start_idxs = list(set([i for i, item in enumerate(ref_data) if item == start_item]))\n    if not start_idxs:\n        raise ValueError(f\"Segment start item: {start_item} not in data.\")\n    if len(start_idxs) > 1:\n        WranglerLogger.warning(\n            f\"Found multiple starting locations for data segment: 
{start_item}.\\\n                                Choosing first \u2013 largest segment being selected.\"\n        )\n    start_idx = min(start_idxs)\n\n    # find the end node starting from the start index.\n    end_idxs = [i + start_idx for i, item in enumerate(ref_data[start_idx:]) if item == end_item]\n    # WranglerLogger.debug(f\"End indexes: {end_idxs}\")\n    if not end_idxs:\n        raise ValueError(f\"Segment end item: {end_item} not in data after starting idx.\")\n    if len(end_idxs) > 1:\n        WranglerLogger.warning(\n            f\"Found multiple ending locations for data segment: {end_item}.\\\n                                Choosing last \u2013 largest segment being selected.\"\n        )\n    end_idx = max(end_idxs) + 1\n    # WranglerLogger.debug(\n    # f\"Segmenting data fr {start_item} idx:{start_idx} to {end_item} idx:{end_idx}.\\n{ref_data}\")\n    # -------- Extract the segments --------------------------------------------\n    if isinstance(data, pd.DataFrame):\n        before_segment = data.iloc[:start_idx]\n        selected_segment = data.iloc[start_idx:end_idx]\n        after_segment = data.iloc[end_idx:]\n    else:\n        before_segment = data[:start_idx]\n        selected_segment = data[start_idx:end_idx]\n        after_segment = data[end_idx:]\n\n    if isinstance(data, pd.Series) or isinstance(data, pd.DataFrame):\n        before_segment = before_segment.reset_index(drop=True)\n        selected_segment = selected_segment.reset_index(drop=True)\n        after_segment = after_segment.reset_index(drop=True)\n\n    # WranglerLogger.debug(f\"Segmented data into before, selected, and after.\\n \\\n    #    Before:\\n{before_segment}\\nSelected:\\n{selected_segment}\\nAfter:\\n{after_segment}\")\n\n    return before_segment, selected_segment, after_segment\n
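The inclusive-endpoint contract can be shown on a plain list; this is a simplified sketch of the slicing behavior (ignoring the duplicate-item and end-value handling the full function adds):

```python
# Selected segment runs from the first to the last item, inclusive.
data = [10, 20, 30, 40, 50, 60]
item_list = [20, 50]

start_idx = data.index(item_list[0])
# search for the end item starting at start_idx, then make it inclusive
end_idx = data.index(item_list[-1], start_idx) + 1

before = data[:start_idx]
selected = data[start_idx:end_idx]
after = data[end_idx:]
print(before, selected, after)  # [10] [20, 30, 40, 50] [60]
```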
"},{"location":"api/#network_wrangler.utils.data.segment_data_by_selection_min_overlap","title":"segment_data_by_selection_min_overlap(selection_list, data, field, replacements_list, end_val=0)","text":"

Segments data based on item_list reducing overlap with replacement list.

Selected segment: everything from the first to the last item in item_list, inclusive of both endpoints, except where the first or last item overlaps with the replacement list. Before segment = everything before the selected segment. After segment = everything after it.

Example: selection_list = [2,5] data = pd.DataFrame({\u201ci\u201d:[1,2,3,4,5,6]}) field = \u201ci\u201d replacements_list = [2,22,33]

Returns:

Type Description list

[22,33]

tuple[Union[Series, list, DataFrame]]

[1], [2,3,4,5], [6]

Parameters:

Name Type Description Default selection_list list

List of items to segment data by. If longer than two, will only use the first and last items.

required data Union[Series, DataFrame]

Data to segment into before, middle, and after.

required field str

Specifies which field to reference.

required replacements_list list

List of items to eventually replace the selected segment with.

required end_val int

Sentinel value indicating "until the end" or "from the beginning". Defaults to 0.

0

tuple containing:

Type Description list tuple[Union[Series, list, DataFrame]] Source code in network_wrangler/utils/data.py
def segment_data_by_selection_min_overlap(\n    selection_list: list,\n    data: pd.DataFrame,\n    field: str,\n    replacements_list: list,\n    end_val=0,\n) -> tuple[list, tuple[Union[pd.Series, list, pd.DataFrame]]]:\n    \"\"\"Segments data based on item_list reducing overlap with replacement list.\n\n    *selected segment*: everything from the first to last item in item_list inclusive of the first\n        and last items but not if first and last items overlap with replacement list.\n    Before segment = everything before\n    After segment = everything after\n\n    Example:\n    selection_list = [2,5]\n    data = pd.DataFrame({\"i\":[1,2,3,4,5,6]})\n    field = \"i\"\n    replacements_list = [2,22,33]\n\n    returns:\n        [22,33]\n        [1], [2,3,4,5], [6]\n\n    Args:\n        selection_list (list): List of items to segment data by. If longer than two, will only\n            use the first and last items.\n        data (Union[pd.Series, pd.DataFrame]): Data to segment into before, middle, and after.\n        field (str): Specifies which field to reference.\n        replacements_list (list): List of items to eventually replace the selected segment with.\n        end_val (int, optional): Notation for util the end or from the begining. Defaults to 0.\n\n    Returns: tuple containing:\n        - updated replacement_list\n        - tuple of before, selected segment, and after data\n    \"\"\"\n    before_segment, segment_df, after_segment = segment_data_by_selection(\n        selection_list, data, field=field, end_val=end_val\n    )\n\n    if replacements_list[0] == segment_df[field].iat[0]:\n        # move first item from selected segment to the before_segment df\n        replacements_list = replacements_list[1:]\n        before_segment = pd.concat(\n            [before_segment, segment_df.iloc[:1]], ignore_index=True, sort=False\n        )\n        segment_df = segment_df.iloc[1:]\n        WranglerLogger.debug(f\"item start overlaps with replacement. 
Repl: {replacements_list}\")\n    if replacements_list and replacements_list[-1] == data[field].iat[-1]:\n        # move last item from selected segment to the after_segment df\n        replacements_list = replacements_list[:-1]\n        after_segment = pd.concat([data.iloc[-1:], after_segment], ignore_index=True, sort=False)\n        segment_df = segment_df.iloc[:-1]\n        WranglerLogger.debug(f\"item end overlaps with replacement. Repl: {replacements_list}\")\n\n    return replacements_list, (before_segment, segment_df, after_segment)\n
"},{"location":"api/#network_wrangler.utils.data.update_df_by_col_value","title":"update_df_by_col_value(destination_df, source_df, join_col, properties=None, fail_if_missing=True)","text":"

Updates destination_df with ALL values in source_df for specified props with same join_col.

source_df can contain a subset of the IDs in destination_df. If fail_if_missing is True, destination_df must contain all the IDs in source_df, ensuring that every source_df value is carried into the resulting dataframe.

>> destination_df\ntrip_id  property1  property2\n1         10      100\n2         20      200\n3         30      300\n4         40      400\n\n>> source_df\ntrip_id  property1  property2\n2         25      250\n3         35      350\n\n>> updated_df\ntrip_id  property1  property2\n0        1       10      100\n1        2       25      250\n2        3       35      350\n3        4       40      400\n

Parameters:

Name Type Description Default destination_df DataFrame

Dataframe to modify.

required source_df DataFrame

Dataframe with updated columns

required join_col str

column to join on

required properties list[str]

List of properties to use. If None, will default to all in source_df.

None fail_if_missing bool

If True, will raise an error if there are missing IDs in destination_df that exist in source_df.

True Source code in network_wrangler/utils/data.py
def update_df_by_col_value(\n    destination_df: pd.DataFrame,\n    source_df: pd.DataFrame,\n    join_col: str,\n    properties: list[str] = None,\n    fail_if_missing: bool = True,\n) -> pd.DataFrame:\n    \"\"\"Updates destination_df with ALL values in source_df for specified props with same join_col.\n\n    Source_df can contain a subset of IDs of destination_df.\n    If fail_if_missing is true, destination_df must have all\n    the IDS in source DF - ensuring all source_df values are contained in resulting df.\n\n    ```\n    >> destination_df\n    trip_id  property1  property2\n    1         10      100\n    2         20      200\n    3         30      300\n    4         40      400\n\n    >> source_df\n    trip_id  property1  property2\n    2         25      250\n    3         35      350\n\n    >> updated_df\n    trip_id  property1  property2\n    0        1       10      100\n    1        2       25      250\n    2        3       35      350\n    3        4       40      400\n    ```\n\n    Args:\n        destination_df (pd.DataFrame): Dataframe to modify.\n        source_df (pd.DataFrame): Dataframe with updated columns\n        join_col (str): column to join on\n        properties (list[str]): List of properties to use. If None, will default to all\n            in source_df.\n        fail_if_missing (bool): If True, will raise an error if there are missing IDs in\n            destination_df that exist in source_df.\n    \"\"\"\n    # 1. 
Identify which properties should be updated; and if they exist in both DFs.\n    if properties is None:\n        properties = [\n            c for c in source_df.columns if c in destination_df.columns and c != join_col\n        ]\n    else:\n        _dest_miss = _df_missing_cols(destination_df, properties + [join_col])\n        if _dest_miss:\n            raise MissingPropertiesError(f\"Properties missing from destination_df: {_dest_miss}\")\n        _source_miss = _df_missing_cols(source_df, properties + [join_col])\n        if _source_miss:\n            raise MissingPropertiesError(f\"Properties missing from source_df: {_source_miss}\")\n\n    # 2. Identify if there are IDs missing from destintation_df that exist in source_df\n    if fail_if_missing:\n        missing_ids = set(source_df[join_col]) - set(destination_df[join_col])\n        if missing_ids:\n            raise InvalidJoinFieldError(f\"IDs missing from source_df: \\n{missing_ids}\")\n\n    WranglerLogger.debug(f\"Updating properties for {len(source_df)} records: {properties}.\")\n\n    if not source_df[join_col].is_unique:\n        InvalidJoinFieldError(\"Can't join from source_df when join_col: {join_col} is not unique.\")\n\n    if not destination_df[join_col].is_unique:\n        return _update_props_from_one_to_many(destination_df, source_df, join_col, properties)\n\n    return _update_props_for_common_idx(destination_df, source_df, join_col, properties)\n
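The worked example in the docstring can be reproduced with plain pandas; this is a sketch of the documented behavior (align on join_col, overwrite matching rows), not the library internals. Note `DataFrame.update` upcasts to float, hence the `astype(int)`:

```python
import pandas as pd

destination_df = pd.DataFrame({"trip_id": [1, 2, 3, 4],
                               "property1": [10, 20, 30, 40]})
source_df = pd.DataFrame({"trip_id": [2, 3],
                          "property1": [25, 35]})

# Align both frames on the join column, then overwrite only the rows
# present in source_df; other destination rows are left untouched.
updated = destination_df.set_index("trip_id")
updated.update(source_df.set_index("trip_id"))
updated = updated.reset_index()
print(updated["property1"].astype(int).tolist())  # [10, 25, 35, 40]
```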
"},{"location":"api/#network_wrangler.utils.data.validate_existing_value_in_df","title":"validate_existing_value_in_df(df, idx, field, expected_value)","text":"

Validate if df[field]==expected_value for all indices in idx.

Source code in network_wrangler/utils/data.py
def validate_existing_value_in_df(df: pd.DataFrame, idx: list[int], field: str, expected_value):\n    \"\"\"Validate if df[field]==expected_value for all indices in idx.\"\"\"\n    if field not in df.columns:\n        WranglerLogger.warning(f\"!! {field} Not an existing field.\")\n        return False\n    if not df.loc[idx, field].eq(expected_value).all():\n        WranglerLogger.warning(\n            f\"Existing value defined for {field} in project card \\\n            does not match the value in the selection links. \\n\\\n            Specified Existing: {expected_value}\\n\\\n            Actual Existing: \\n {df.loc[idx, field]}.\"\n        )\n        return False\n    return True\n
"},{"location":"api/#network_wrangler.utils.geo.InvalidCRSError","title":"InvalidCRSError","text":"

Bases: Exception

Raised when a point is not valid for a given coordinate reference system.

Source code in network_wrangler/utils/geo.py
class InvalidCRSError(Exception):\n    \"\"\"Raised when a point is not valid for a given coordinate reference system.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.geo.MissingNodesError","title":"MissingNodesError","text":"

Bases: Exception

Raised when referenced nodes are missing from the network.

Source code in network_wrangler/utils/geo.py
class MissingNodesError(Exception):\n    \"\"\"Raised when referenced nodes are missing from the network.\"\"\"\n\n    pass\n
"},{"location":"api/#network_wrangler.utils.geo.check_point_valid_for_crs","title":"check_point_valid_for_crs(point, crs)","text":"

Check if a point is valid for a given coordinate reference system.

Parameters:

Name Type Description Default point Point

Shapely Point

required crs int

coordinate reference system as an EPSG code

required Source code in network_wrangler/utils/geo.py
def check_point_valid_for_crs(point: Point, crs: int):\n    \"\"\"Check if a point is valid for a given coordinate reference system.\n\n    Args:\n        point: Shapely Point\n        crs: coordinate reference system in ESPG code\n\n    raises: InvalidCRSError if point is not valid for the given crs\n    \"\"\"\n    crs = CRS.from_user_input(crs)\n    minx, miny, maxx, maxy = crs.area_of_use.bounds\n    ok_bounds = True\n    if not minx <= point.x <= maxx:\n        WranglerLogger.error(f\"Invalid X coordinate for CRS {crs}: {point.x}\")\n        ok_bounds = False\n    if not miny <= point.y <= maxy:\n        WranglerLogger.error(f\"Invalid Y coordinate for CRS {crs}: {point.y}\")\n        ok_bounds = False\n\n    if not ok_bounds:\n        raise InvalidCRSError(f\"Invalid coordinate for CRS {crs}: {point.x}, {point.y}\")\n
"},{"location":"api/#network_wrangler.utils.geo.get_bearing","title":"get_bearing(lat1, lon1, lat2, lon2)","text":"

Calculate the bearing (forward azimuth) between the two points.

returns: bearing in radians

Source code in network_wrangler/utils/geo.py
def get_bearing(lat1, lon1, lat2, lon2):\n    \"\"\"Calculate the bearing (forward azimuth) b/w the two points.\n\n    returns: bearing in radians\n    \"\"\"\n    # bearing in degrees\n    brng = Geodesic.WGS84.Inverse(lat1, lon1, lat2, lon2)[\"azi1\"]\n\n    # convert bearing to radians\n    brng = math.radians(brng)\n\n    return brng\n
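The library computes the azimuth on the WGS84 ellipsoid via geographiclib; the great-circle (spherical) formula below is a dependency-free approximation of the same quantity, shown only to illustrate what a forward azimuth in radians looks like:

```python
import math

def spherical_bearing(lat1, lon1, lat2, lon2):
    # Great-circle forward azimuth on a sphere (approximates the WGS84
    # geodesic azimuth the utility returns); result is in radians.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.atan2(x, y)

# Due east along the equator -> pi/2 radians (90 degrees).
print(round(spherical_bearing(0.0, 0.0, 0.0, 1.0), 6))
```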
"},{"location":"api/#network_wrangler.utils.geo.get_bounding_polygon","title":"get_bounding_polygon(boundary_geocode=None, boundary_file=None, boundary_gdf=None, crs=LAT_LON_CRS)","text":"

Get the bounding polygon for a given boundary.

This function retrieves the bounding polygon for a given boundary. The boundary can be provided as a GeoDataFrame, a geocode string or dictionary, or a boundary file. The resulting polygon geometry is returned as a GeoSeries.

Parameters:

Name Type Description Default boundary_geocode Union[str, dict]

A geocode string or dictionary representing the boundary. Defaults to None.

None boundary_file Union[str, Path]

A path to the boundary file. Only used if boundary_geocode is None. Defaults to None.

None boundary_gdf GeoDataFrame

A GeoDataFrame representing the boundary. Only used if boundary_geocode and boundary_file are None. Defaults to None.

None crs int

The coordinate reference system (CRS) code. Defaults to 4326 (WGS84).

LAT_LON_CRS

Returns:

Type Description GeoSeries

gpd.GeoSeries: The polygon geometry representing the bounding polygon.

Source code in network_wrangler/utils/geo.py
def get_bounding_polygon(\n    boundary_geocode: Optional[Union[str, dict]] = None,\n    boundary_file: Optional[Union[str, Path]] = None,\n    boundary_gdf: Optional[gpd.GeoDataFrame] = None,\n    crs: int = LAT_LON_CRS,  # WGS84\n) -> gpd.GeoSeries:\n    \"\"\"Get the bounding polygon for a given boundary first prioritizing the.\n\n    This function retrieves the bounding polygon for a given boundary. The boundary can be provided\n    as a GeoDataFrame, a geocode string or dictionary, or a boundary file. The resulting polygon\n    geometry is returned as a GeoSeries.\n\n    Args:\n        boundary_geocode (Union[str, dict], optional): A geocode string or dictionary\n            representing the boundary. Defaults to None.\n        boundary_file (Union[str, Path], optional): A path to the boundary file. Only used if\n            boundary_geocode is None. Defaults to None.\n        boundary_gdf (gpd.GeoDataFrame, optional): A GeoDataFrame representing the boundary.\n            Only used if boundary_geocode and boundary_file are None. Defaults to None.\n        crs (int, optional): The coordinate reference system (CRS) code. 
Defaults to 4326 (WGS84).\n\n    Returns:\n        gpd.GeoSeries: The polygon geometry representing the bounding polygon.\n    \"\"\"\n    import osmnx as ox\n\n    if sum(x is not None for x in [boundary_gdf, boundary_geocode, boundary_file]) != 1:\n        raise ValueError(\n            \"Exacly one of boundary_gdf, boundary_geocode, or boundary_shp must \\\n                         be provided\"\n        )\n\n    OK_BOUNDARY_SUFF = [\".shp\", \".geojson\", \".parquet\"]\n\n    if boundary_geocode is not None:\n        boundary_gdf = ox.geocode_to_gdf(boundary_geocode)\n    if boundary_file is not None:\n        boundary_file = Path(boundary_file)\n        if boundary_file.suffix not in OK_BOUNDARY_SUFF:\n            raise ValueError(\n                f\"Boundary file must have one of the following suffixes: {OK_BOUNDARY_SUFF}\"\n            )\n        if not boundary_file.exists():\n            raise FileNotFoundError(f\"Boundary file {boundary_file} does not exist\")\n        if boundary_file.suffix == \".parquet\":\n            boundary_gdf = gpd.read_parquet(boundary_file)\n        else:\n            boundary_gdf = gpd.read_file(boundary_file)\n            if boundary_file.suffix == \".geojson\":  # geojson standard is WGS84\n                boundary_gdf.crs = crs\n\n    if boundary_gdf.crs is not None:\n        boundary_gdf = boundary_gdf.to_crs(crs)\n    # make sure boundary_gdf is a polygon\n    if len(boundary_gdf.geom_type[boundary_gdf.geom_type != \"Polygon\"]) > 0:\n        raise ValueError(\"boundary_gdf must all be Polygons\")\n    # get the boundary as a single polygon\n    boundary_gs = gpd.GeoSeries([boundary_gdf.geometry.unary_union], crs=crs)\n\n    return boundary_gs\n
"},{"location":"api/#network_wrangler.utils.geo.get_point_geometry_from_linestring","title":"get_point_geometry_from_linestring(polyline_geometry, pos=0)","text":"

Get a point geometry from a linestring geometry.

Parameters:

Name Type Description Default polyline_geometry

shapely LineString instance

required pos int

position in the linestring to get the point from. Defaults to 0.

0 Source code in network_wrangler/utils/geo.py
def get_point_geometry_from_linestring(polyline_geometry, pos: int = 0):\n    \"\"\"Get a point geometry from a linestring geometry.\n\n    Args:\n        polyline_geometry: shapely LineString instance\n        pos: position in the linestring to get the point from. Defaults to 0.\n    \"\"\"\n    # WranglerLogger.debug(\n    #    f\"get_point_geometry_from_linestring.polyline_geometry.coords[0]: \\\n    #    {polyline_geometry.coords[0]}.\"\n    # )\n\n    # Note: when upgrading to shapely 2.0, will need to use following command\n    # _point_coords = get_coordinates(polyline_geometry).tolist()[pos]\n    return point_from_xy(*polyline_geometry.coords[pos])\n
"},{"location":"api/#network_wrangler.utils.geo.length_of_linestring_miles","title":"length_of_linestring_miles(gdf)","text":"

Returns a Series with the linestring length in miles.

Parameters:

Name Type Description Default gdf Union[GeoSeries, GeoDataFrame]

GeoDataFrame with linestring geometry. If given a GeoSeries will attempt to convert to a GeoDataFrame.

required Source code in network_wrangler/utils/geo.py
def length_of_linestring_miles(gdf: Union[gpd.GeoSeries, gpd.GeoDataFrame]) -> pd.Series:\n    \"\"\"Returns a Series with the linestring length in miles.\n\n    Args:\n        gdf: GeoDataFrame with linestring geometry.  If given a GeoSeries will attempt to convert\n            to a GeoDataFrame.\n    \"\"\"\n    # WranglerLogger.debug(f\"length_of_linestring_miles.gdf input:\\n{gdf}.\")\n    if isinstance(gdf, gpd.GeoSeries):\n        gdf = gpd.GeoDataFrame(geometry=gdf)\n\n    p_crs = gdf.estimate_utm_crs()\n    gdf = gdf.to_crs(p_crs)\n    METERS_IN_MILES = 1609.34\n    length_miles = gdf.geometry.length / METERS_IN_MILES\n    length_s = pd.Series(length_miles, index=gdf.index)\n\n    return length_s\n
"},{"location":"api/#network_wrangler.utils.geo.linestring_from_lats_lons","title":"linestring_from_lats_lons(df, lat_fields, lon_fields)","text":"

Create a LineString geometry from a DataFrame with lon/lat fields.

Parameters:

Name Type Description Default df

DataFrame with columns for lon/lat fields.

required lat_fields

list of column names for the lat fields.

required lon_fields

list of column names for the lon fields.

required Source code in network_wrangler/utils/geo.py
def linestring_from_lats_lons(df, lat_fields, lon_fields) -> gpd.GeoSeries:\n    \"\"\"Create a LineString geometry from a DataFrame with lon/lat fields.\n\n    Args:\n        df: DataFrame with columns for lon/lat fields.\n        lat_fields: list of column names for the lat fields.\n        lon_fields: list of column names for the lon fields.\n    \"\"\"\n    if len(lon_fields) != len(lat_fields):\n        raise ValueError(\"lon_fields and lat_fields lists must have the same length\")\n\n    line_geometries = gpd.GeoSeries(\n        [\n            LineString([(row[lon], row[lat]) for lon, lat in zip(lon_fields, lat_fields)])\n            for _, row in df.iterrows()\n        ]\n    )\n\n    return gpd.GeoSeries(line_geometries)\n
"},{"location":"api/#network_wrangler.utils.geo.linestring_from_nodes","title":"linestring_from_nodes(links_df, nodes_df, from_node='A', to_node='B', node_pk='model_node_id')","text":"

Creates a LineString geometry GeoSeries from a DataFrame of links and a DataFrame of nodes.

Parameters:

Name Type Description Default links_df DataFrame

DataFrame with columns for from_node and to_node.

required nodes_df GeoDataFrame

GeoDataFrame with geometry column.

required from_node str

column name in links_df for the from node. Defaults to \u201cA\u201d.

'A' to_node str

column name in links_df for the to node. Defaults to \u201cB\u201d.

'B' node_pk str

primary key column name in nodes_df. Defaults to \u201cmodel_node_id\u201d.

'model_node_id' Source code in network_wrangler/utils/geo.py
def linestring_from_nodes(\n    links_df: pd.DataFrame,\n    nodes_df: gpd.GeoDataFrame,\n    from_node: str = \"A\",\n    to_node: str = \"B\",\n    node_pk: str = \"model_node_id\",\n) -> gpd.GeoSeries:\n    \"\"\"Creates a LineString geometry GeoSeries from a DataFrame of links and a DataFrame of nodes.\n\n    Args:\n        links_df: DataFrame with columns for from_node and to_node.\n        nodes_df: GeoDataFrame with geometry column.\n        from_node: column name in links_df for the from node. Defaults to \"A\".\n        to_node: column name in links_df for the to node. Defaults to \"B\".\n        node_pk: primary key column name in nodes_df. Defaults to \"model_node_id\".\n    \"\"\"\n    assert \"geometry\" in nodes_df.columns, \"nodes_df must have a 'geometry' column\"\n\n    idx_name = \"index\" if links_df.index.name is None else links_df.index.name\n    WranglerLogger.debug(f\"Index name: {idx_name}\")\n    required_link_cols = [from_node, to_node]\n\n    if not all([col in links_df.columns for col in required_link_cols]):\n        WranglerLogger.error(\n            f\"links_df.columns missing required columns.\\n\\\n                            links_df.columns: {links_df.columns}\\n\\\n                            required_link_cols: {required_link_cols}\"\n        )\n        raise ValueError(\n            f\"links_df must have columns {required_link_cols} to create linestring from nodes\"\n        )\n\n    links_geo_df = links_df[required_link_cols].copy()\n    # need to continuously reset the index to make sure the index is the same as the link index\n    links_geo_df = (\n        links_geo_df.reset_index()\n        .merge(\n            nodes_df[[node_pk, \"geometry\"]],\n            left_on=from_node,\n            right_on=node_pk,\n            how=\"left\",\n        )\n        .set_index(idx_name)\n    )\n\n    links_geo_df = links_geo_df.rename(columns={\"geometry\": \"geometry_A\"})\n\n    links_geo_df = (\n        links_geo_df.reset_index()\n   
     .merge(\n            nodes_df[[node_pk, \"geometry\"]],\n            left_on=to_node,\n            right_on=node_pk,\n            how=\"left\",\n        )\n        .set_index(idx_name)\n    )\n\n    links_geo_df = links_geo_df.rename(columns={\"geometry\": \"geometry_B\"})\n\n    # makes sure all nodes exist\n    _missing_geo_links_df = links_geo_df[\n        links_geo_df[\"geometry_A\"].isnull() | links_geo_df[\"geometry_B\"].isnull()\n    ]\n    if not _missing_geo_links_df.empty:\n        missing_nodes = _missing_geo_links_df[[from_node, to_node]].values\n        WranglerLogger.error(\n            f\"Cannot create link geometry from nodes because the nodes are\\\n                             missing from the network. Missing nodes: {missing_nodes}\"\n        )\n        raise MissingNodesError(\"Specified from/to nodes are missing in nodes_df\")\n\n    # create geometry from points\n    links_geo_df[\"geometry\"] = links_geo_df.apply(\n        lambda row: LineString([row[\"geometry_A\"], row[\"geometry_B\"]]), axis=1\n    )\n\n    # convert to GeoDataFrame\n    links_gdf = gpd.GeoDataFrame(links_geo_df[\"geometry\"], geometry=links_geo_df[\"geometry\"])\n    return links_gdf[\"geometry\"]\n
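The merges above boil down to looking up each link's A and B node geometry and failing loudly when a node is absent. A minimal dict-based sketch of that logic (variable names hypothetical, plain tuples standing in for node geometries):

```python
# model_node_id -> (x, y); a stand-in for nodes_df
nodes = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (1.0, 1.0)}
links = [{"A": 1, "B": 2}, {"A": 2, "B": 3}]

# mirror the missing-node guard before building any geometry
missing = [(l["A"], l["B"]) for l in links if l["A"] not in nodes or l["B"] not in nodes]
if missing:
    raise KeyError(f"links reference nodes missing from the network: {missing}")

# each link becomes a two-point line from its A node to its B node
line_coords = [(nodes[l["A"]], nodes[l["B"]]) for l in links]
print(line_coords)
```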
"},{"location":"api/#network_wrangler.utils.geo.location_ref_from_point","title":"location_ref_from_point(geometry, sequence=1, bearing=None, distance_to_next_ref=None)","text":"

Generates a shared street point location reference.

Parameters:

Name Type Description Default geometry Point

Point shapely geometry

required sequence int

Sequence if part of polyline. Defaults to 1.

1 bearing float

Direction of line if part of polyline. Defaults to None.

None distance_to_next_ref float

Distance to next point if part of polyline. Defaults to None.

None

Returns:

Name Type Description LocationReference LocationReference

As defined by sharedStreets.io schema

Source code in network_wrangler/utils/geo.py
def location_ref_from_point(\n    geometry: Point,\n    sequence: int = 1,\n    bearing: float = None,\n    distance_to_next_ref: float = None,\n) -> LocationReference:\n    \"\"\"Generates a shared street point location reference.\n\n    Args:\n        geometry (Point): Point shapely geometry\n        sequence (int, optional): Sequence if part of polyline. Defaults to 1.\n        bearing (float, optional): Direction of line if part of polyline. Defaults to None.\n        distance_to_next_ref (float, optional): Distance to next point if part of polyline.\n            Defaults to None.\n\n    Returns:\n        LocationReference: As defined by sharedStreets.io schema\n    \"\"\"\n    lr = {\n        \"point\": LatLongCoordinates(geometry.coords[0]),\n    }\n\n    for arg in [\"sequence\", \"bearing\", \"distance_to_next_ref\"]:\n        if locals()[arg] is not None:\n            lr[arg] = locals()[arg]\n\n    return LocationReference(**lr)\n
"},{"location":"api/#network_wrangler.utils.geo.location_refs_from_linestring","title":"location_refs_from_linestring(geometry)","text":"

Generates a shared street location reference from linestring.

Parameters:

Name Type Description Default geometry LineString

Shapely LineString instance

required

Returns:

Name Type Description LocationReferences List[LocationReference]

As defined by sharedStreets.io schema

Source code in network_wrangler/utils/geo.py
def location_refs_from_linestring(geometry: LineString) -> List[LocationReference]:\n    \"\"\"Generates a shared street location reference from linestring.\n\n    Args:\n        geometry (LineString): Shapely LineString instance\n\n    Returns:\n        LocationReferences: As defined by sharedStreets.io schema\n    \"\"\"\n    # geometry.coords yields plain coordinate tuples, so wrap them in Points before measuring\n    return [\n        location_ref_from_point(\n            Point(coord),\n            sequence=i + 1,\n            distance_to_next_ref=Point(coord).distance(Point(geometry.coords[i + 1])),\n            bearing=get_bearing(*coord, *geometry.coords[i + 1]),\n        )\n        for i, coord in enumerate(geometry.coords[:-1])\n    ]\n
"},{"location":"api/#network_wrangler.utils.geo.offset_point_with_distance_and_bearing","title":"offset_point_with_distance_and_bearing(lon, lat, distance, bearing)","text":"

Get the new lon-lat (in degrees) given current point (lon-lat), distance and bearing.

Parameters:

Name Type Description Default lon float

longitude of original point

required lat float

latitude of original point

required distance float

distance in meters to offset point by

required bearing float

direction to offset point to in radians

required Source code in network_wrangler/utils/geo.py
def offset_point_with_distance_and_bearing(\n    lon: float, lat: float, distance: float, bearing: float\n) -> List[float]:\n    \"\"\"Get the new lon-lat (in degrees) given current point (lon-lat), distance and bearing.\n\n    Args:\n        lon: longitude of original point\n        lat: latitude of original point\n        distance: distance in meters to offset point by\n        bearing: direction to offset point to in radians\n\n    returns: list of new offset lon-lat\n    \"\"\"\n    # Earth's radius in meters\n    radius = 6378137\n\n    # convert the lat long from degree to radians\n    lat_radians = math.radians(lat)\n    lon_radians = math.radians(lon)\n\n    # calculate the new lat long in radians\n    out_lat_radians = math.asin(\n        math.sin(lat_radians) * math.cos(distance / radius)\n        + math.cos(lat_radians) * math.sin(distance / radius) * math.cos(bearing)\n    )\n\n    out_lon_radians = lon_radians + math.atan2(\n        math.sin(bearing) * math.sin(distance / radius) * math.cos(lat_radians),\n        math.cos(distance / radius) - math.sin(lat_radians) * math.sin(out_lat_radians),\n    )\n    # convert the new lat long back to degree\n    out_lat = math.degrees(out_lat_radians)\n    out_lon = math.degrees(out_lon_radians)\n\n    return [out_lon, out_lat]\n
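For reference, the standard spherical destination-point formula can be written standalone with only `math` (function name here is illustrative); note that the second `atan2` term of the standard form uses the destination latitude:

```python
import math


def offset_point(lon, lat, distance, bearing):
    """Standard spherical destination point: offset (lon, lat) by distance (m) along bearing (rad)."""
    radius = 6378137  # Earth's radius in meters
    lat1, lon1 = math.radians(lat), math.radians(lon)
    d = distance / radius  # angular distance in radians
    lat2 = math.asin(
        math.sin(lat1) * math.cos(d) + math.cos(lat1) * math.sin(d) * math.cos(bearing)
    )
    lon2 = lon1 + math.atan2(
        math.sin(bearing) * math.sin(d) * math.cos(lat1),
        math.cos(d) - math.sin(lat1) * math.sin(lat2),
    )
    return [math.degrees(lon2), math.degrees(lat2)]


# moving due north (bearing 0) from the equator by one degree of arc raises latitude by ~1 degree
print(offset_point(0.0, 0.0, 6378137 * math.pi / 180, 0.0))
```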
"},{"location":"api/#network_wrangler.utils.geo.point_from_xy","title":"point_from_xy(x, y, xy_crs=LAT_LON_CRS, point_crs=LAT_LON_CRS)","text":"

Creates point geometry from x and y coordinates.

Parameters:

Name Type Description Default x

x coordinate, in xy_crs

required y

y coordinate, in xy_crs

required xy_crs int

coordinate reference system in EPSG code for x/y inputs. Defaults to 4326 (WGS84)

LAT_LON_CRS point_crs int

coordinate reference system in EPSG code for point output. Defaults to 4326 (WGS84)

LAT_LON_CRS Source code in network_wrangler/utils/geo.py
def point_from_xy(x, y, xy_crs: int = LAT_LON_CRS, point_crs: int = LAT_LON_CRS):\n    \"\"\"Creates point geometry from x and y coordinates.\n\n    Args:\n        x: x coordinate, in xy_crs\n        y: y coordinate, in xy_crs\n        xy_crs: coordinate reference system in EPSG code for x/y inputs. Defaults to 4326 (WGS84)\n        point_crs: coordinate reference system in EPSG code for point output.\n            Defaults to 4326 (WGS84)\n\n    Returns: Shapely Point in point_crs\n    \"\"\"\n    point = Point(x, y)\n\n    if xy_crs == point_crs:\n        check_point_valid_for_crs(point, point_crs)\n        return point\n\n    if (xy_crs, point_crs) not in transformers:\n        # store transformers in dictionary because they are an \"expensive\" operation\n        transformers[(xy_crs, point_crs)] = Transformer.from_proj(\n            Proj(init=\"epsg:\" + str(xy_crs)),\n            Proj(init=\"epsg:\" + str(point_crs)),\n            always_xy=True,  # required b/c Proj v6+ uses lon/lat instead of x/y\n        )\n\n    return transform(transformers[(xy_crs, point_crs)].transform, point)\n
"},{"location":"api/#network_wrangler.utils.geo.to_points_gdf","title":"to_points_gdf(table, ref_nodes_df=None, ref_road_net=None)","text":"

Convert a table to a GeoDataFrame.

If the table is already a GeoDataFrame, return it as is. Otherwise, attempt to convert the table to a GeoDataFrame using the following methods: 1. If the table has a \u2018geometry\u2019 column, return a GeoDataFrame using that column. 2. If the table has \u2018lat\u2019 and \u2018lon\u2019 columns, return a GeoDataFrame using those columns. 3. If the table has a \u2018*model_node_id\u2019 column, return a GeoDataFrame using that column and the nodes_df provided. If none of the above, raise a ValueError.

Parameters:

Name Type Description Default table DataFrame

DataFrame to convert to GeoDataFrame.

required ref_nodes_df GeoDataFrame

GeoDataFrame of nodes to use to convert model_node_id to geometry.

None ref_road_net 'RoadwayNetwork'

RoadwayNetwork object to use to convert model_node_id to geometry.

None

Returns:

Name Type Description GeoDataFrame GeoDataFrame

GeoDataFrame representation of the table.

Source code in network_wrangler/utils/geo.py
def to_points_gdf(\n    table: pd.DataFrame,\n    ref_nodes_df: gpd.GeoDataFrame = None,\n    ref_road_net: \"RoadwayNetwork\" = None,\n) -> gpd.GeoDataFrame:\n    \"\"\"Convert a table to a GeoDataFrame.\n\n    If the table is already a GeoDataFrame, return it as is. Otherwise, attempt to convert the\n    table to a GeoDataFrame using the following methods:\n    1. If the table has a 'geometry' column, return a GeoDataFrame using that column.\n    2. If the table has 'lat' and 'lon' columns, return a GeoDataFrame using those columns.\n    3. If the table has a '*model_node_id' column, return a GeoDataFrame using that column and the\n         nodes_df provided.\n    If none of the above, raise a ValueError.\n\n    Args:\n        table: DataFrame to convert to GeoDataFrame.\n        ref_nodes_df: GeoDataFrame of nodes to use to convert model_node_id to geometry.\n        ref_road_net: RoadwayNetwork object to use to convert model_node_id to geometry.\n\n    Returns:\n        GeoDataFrame: GeoDataFrame representation of the table.\n    \"\"\"\n    if isinstance(table, gpd.GeoDataFrame):\n        return table\n\n    WranglerLogger.debug(\"Converting GTFS table to GeoDataFrame\")\n    if \"geometry\" in table.columns:\n        return gpd.GeoDataFrame(table, geometry=\"geometry\")\n\n    lat_cols = list(filter(lambda col: \"lat\" in col, table.columns))\n    lon_cols = list(filter(lambda col: \"lon\" in col, table.columns))\n    model_node_id_cols = list(filter(lambda col: \"model_node_id\" in col, table.columns))\n\n    if not (lat_cols and lon_cols) and not model_node_id_cols:\n        raise ValueError(\n            \"Could not find lat/long, geometry columns or *model_node_id column in \\\n                         table necessary to convert to GeoDataFrame\"\n        )\n\n    if lat_cols and lon_cols:\n        # using first found lat and lon columns\n        return gpd.GeoDataFrame(\n            table,\n            geometry=gpd.points_from_xy(table[lon_cols[0]], 
table[lat_cols[0]]),\n            crs=\"EPSG:4326\",\n        )\n\n    if model_node_id_cols:\n        node_id_col = model_node_id_cols[0]\n\n        if ref_nodes_df is None:\n            if ref_road_net is None:\n                raise ValueError(\n                    \"Must provide either nodes_df or road_net to convert \\\n                                 model_node_id to geometry\"\n                )\n            ref_nodes_df = ref_road_net.nodes_df\n\n        WranglerLogger.debug(\"Converting table to GeoDataFrame using model_node_id\")\n\n        _table = table.merge(\n            ref_nodes_df[[\"model_node_id\", \"geometry\"]],\n            left_on=node_id_col,\n            right_on=\"model_node_id\",\n        )\n        return gpd.GeoDataFrame(_table, geometry=\"geometry\")\n\n    raise ValueError(\n        \"Could not find lat/long, geometry columns or *model_node_id column in table \\\n                     necessary to convert to GeoDataFrame\"\n    )\n
"},{"location":"api/#network_wrangler.utils.geo.update_nodes_in_linestring_geometry","title":"update_nodes_in_linestring_geometry(original_df, updated_nodes_df, position)","text":"

Updates the nodes in a linestring geometry and returns updated geometry.

Parameters:

Name Type Description Default original_df GeoDataFrame

GeoDataFrame with the model_node_id and linestring geometry

required updated_nodes_df GeoDataFrame

GeoDataFrame with updated node geometries.

required position int

position in the linestring to update with the node.

required Source code in network_wrangler/utils/geo.py
def update_nodes_in_linestring_geometry(\n    original_df: gpd.GeoDataFrame,\n    updated_nodes_df: gpd.GeoDataFrame,\n    position: int,\n) -> gpd.GeoSeries:\n    \"\"\"Updates the nodes in a linestring geometry and returns updated geometry.\n\n    Args:\n        original_df: GeoDataFrame with the `model_node_id` and linestring geometry\n        updated_nodes_df: GeoDataFrame with updated node geometries.\n        position: position in the linestring to update with the node.\n    \"\"\"\n    LINK_FK_NODE = [\"A\", \"B\"]\n    original_index = original_df.index\n\n    updated_df = original_df.reset_index().merge(\n        updated_nodes_df[[\"model_node_id\", \"geometry\"]],\n        left_on=LINK_FK_NODE[position],\n        right_on=\"model_node_id\",\n        suffixes=(\"\", \"_node\"),\n    )\n\n    updated_df[\"geometry\"] = updated_df.apply(\n        lambda row: update_points_in_linestring(\n            row[\"geometry\"], row[\"geometry_node\"].coords[0], position\n        ),\n        axis=1,\n    )\n\n    updated_df = updated_df.reset_index().set_index(original_index.names)\n\n    WranglerLogger.debug(f\"updated_df - AFTER: \\n {updated_df.geometry}\")\n    return updated_df[\"geometry\"]\n
"},{"location":"api/#network_wrangler.utils.geo.update_point_geometry","title":"update_point_geometry(df, ref_point_df, lon_field='X', lat_field='Y', id_field='model_node_id', ref_lon_field='X', ref_lat_field='Y', ref_id_field='model_node_id')","text":"

Returns copy of df with lat and long fields updated with geometry from ref_point_df.

NOTE: does not update \u201cgeometry\u201d field if it exists.

Source code in network_wrangler/utils/geo.py
def update_point_geometry(\n    df: pd.DataFrame,\n    ref_point_df: pd.DataFrame,\n    lon_field: str = \"X\",\n    lat_field: str = \"Y\",\n    id_field: str = \"model_node_id\",\n    ref_lon_field: str = \"X\",\n    ref_lat_field: str = \"Y\",\n    ref_id_field: str = \"model_node_id\",\n) -> pd.DataFrame:\n    \"\"\"Returns copy of df with lat and long fields updated with geometry from ref_point_df.\n\n    NOTE: does not update \"geometry\" field if it exists.\n    \"\"\"\n    df = copy.deepcopy(df)\n\n    ref_df = ref_point_df.rename(\n        columns={\n            ref_lon_field: lon_field,\n            ref_lat_field: lat_field,\n            ref_id_field: id_field,\n        }\n    )\n\n    updated_df = update_df_by_col_value(\n        df,\n        ref_df[[id_field, lon_field, lat_field]],\n        id_field,\n        properties=[lat_field, lon_field],\n        fail_if_missing=False,\n    )\n    return updated_df\n
"},{"location":"api/#network_wrangler.utils.geo.update_points_in_linestring","title":"update_points_in_linestring(linestring, updated_coords, position)","text":"

Replaces a point in a linestring with a new point.

Parameters:

Name Type Description Default linestring LineString

original linestring

required updated_coords List[float]

updated point coordinates

required position int

position in the linestring to update

required Source code in network_wrangler/utils/geo.py
def update_points_in_linestring(\n    linestring: LineString, updated_coords: List[float], position: int\n):\n    \"\"\"Replaces a point in a linestring with a new point.\n\n    Args:\n        linestring (LineString): original linestring\n        updated_coords (List[float]): updated point coordinates\n        position (int): position in the linestring to update\n    \"\"\"\n    coords = [c for c in linestring.coords]\n    coords[position] = updated_coords\n    return LineString(coords)\n
"},{"location":"api/#network_wrangler.utils.df_accessors.DictQueryAccessor","title":"DictQueryAccessor","text":"

Query link, node and shape dataframes using project selection dictionary.

Will overlook any keys which are not columns in the dataframe.

Usage:

selection_dict = {\n    \"lanes\":[1,2,3],\n    \"name\":['6th','Sixth','sixth'],\n    \"drive_access\": 1,\n}\nselected_links_df = links_df.dict_query(selection_dict)\n
Source code in network_wrangler/utils/df_accessors.py
@pd.api.extensions.register_dataframe_accessor(\"dict_query\")\nclass DictQueryAccessor:\n    \"\"\"Query link, node and shape dataframes using project selection dictionary.\n\n    Will overlook any keys which are not columns in the dataframe.\n\n    Usage:\n\n    ```\n    selection_dict = {\n        \"lanes\":[1,2,3],\n        \"name\":['6th','Sixth','sixth'],\n        \"drive_access\": 1,\n    }\n    selected_links_df = links_df.dict_query(selection_dict)\n    ```\n\n    \"\"\"\n\n    def __init__(self, pandas_obj):\n        \"\"\"Initialization function for the dictionary query accessor.\"\"\"\n        self._obj = pandas_obj\n\n    def __call__(self, selection_dict: dict, return_all_if_none: bool = False):\n        \"\"\"Queries the dataframe using the selection dictionary.\n\n        Args:\n            selection_dict (dict): _description_\n            return_all_if_none (bool, optional): If True, will return entire df if dict has\n                 no values. Defaults to False.\n        \"\"\"\n        _selection_dict = {\n            k: v for k, v in selection_dict.items() if k in self._obj.columns and v is not None\n        }\n\n        if not _selection_dict:\n            if return_all_if_none:\n                return self._obj\n            raise ValueError(f\"Relevant part of selection dictionary is empty: {selection_dict}\")\n\n        _sel_query = dict_to_query(_selection_dict)\n        WranglerLogger.debug(f\"_sel_query: \\n   {_sel_query}\")\n        _df = self._obj.query(_sel_query, engine=\"python\")\n\n        if len(_df) == 0:\n            WranglerLogger.warning(\n                f\"No records found in df \\\n                  using selection: {selection_dict}\"\n            )\n        return _df\n
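`dict_to_query` is internal to network_wrangler; a rough stand-in (hypothetical name `dict_to_query_sketch`, exact output format an assumption) shows the kind of pandas `.query()` string the accessor builds, with list values becoming `in` clauses and scalars becoming equality checks:

```python
def dict_to_query_sketch(selection: dict) -> str:
    """Hypothetical stand-in for dict_to_query: build a pandas .query() string."""
    clauses = []
    for key, value in selection.items():
        if isinstance(value, (list, tuple)):
            clauses.append(f"{key} in {list(value)!r}")  # membership test for lists
        else:
            clauses.append(f"{key} == {value!r}")  # equality for scalars
    return " and ".join(clauses)


print(dict_to_query_sketch({"lanes": [1, 2, 3], "drive_access": 1}))
```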
"},{"location":"api/#network_wrangler.utils.df_accessors.DictQueryAccessor.__call__","title":"__call__(selection_dict, return_all_if_none=False)","text":"

Queries the dataframe using the selection dictionary.

Parameters:

Name Type Description Default selection_dict dict

dictionary of column names and values to select by.

required return_all_if_none bool

If True, will return entire df if dict has no values. Defaults to False.

False Source code in network_wrangler/utils/df_accessors.py
def __call__(self, selection_dict: dict, return_all_if_none: bool = False):\n    \"\"\"Queries the dataframe using the selection dictionary.\n\n    Args:\n        selection_dict (dict): _description_\n        return_all_if_none (bool, optional): If True, will return entire df if dict has\n             no values. Defaults to False.\n    \"\"\"\n    _selection_dict = {\n        k: v for k, v in selection_dict.items() if k in self._obj.columns and v is not None\n    }\n\n    if not _selection_dict:\n        if return_all_if_none:\n            return self._obj\n        raise ValueError(f\"Relevant part of selection dictionary is empty: {selection_dict}\")\n\n    _sel_query = dict_to_query(_selection_dict)\n    WranglerLogger.debug(f\"_sel_query: \\n   {_sel_query}\")\n    _df = self._obj.query(_sel_query, engine=\"python\")\n\n    if len(_df) == 0:\n        WranglerLogger.warning(\n            f\"No records found in df \\\n              using selection: {selection_dict}\"\n        )\n    return _df\n
"},{"location":"api/#network_wrangler.utils.df_accessors.DictQueryAccessor.__init__","title":"__init__(pandas_obj)","text":"

Initialization function for the dictionary query accessor.

Source code in network_wrangler/utils/df_accessors.py
def __init__(self, pandas_obj):\n    \"\"\"Initialization function for the dictionary query accessor.\"\"\"\n    self._obj = pandas_obj\n
"},{"location":"api/#network_wrangler.utils.df_accessors.dfHash","title":"dfHash","text":"

Creates a dataframe hash that is compatible with geopandas and various metadata.

Definitely not the fastest, but she seems to work where others have failed.

Source code in network_wrangler/utils/df_accessors.py
@pd.api.extensions.register_dataframe_accessor(\"df_hash\")\nclass dfHash:\n    \"\"\"Creates a dataframe hash that is compatable with geopandas and various metadata.\n\n    Definitely not the fastest, but she seems to work where others have failed.\n    \"\"\"\n\n    def __init__(self, pandas_obj):\n        \"\"\"Initialization function for the dataframe hash.\"\"\"\n        self._obj = pandas_obj\n\n    def __call__(self):\n        \"\"\"Function to hash the dataframe.\"\"\"\n        _value = str(self._obj.values).encode()\n        hash = hashlib.sha1(_value).hexdigest()\n        return hash\n
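The hashing trick is just SHA-1 over the string rendering of the underlying values, which sidesteps unhashable objects like geometries; a dependency-free sketch (helper name illustrative):

```python
import hashlib


def hash_values(values) -> str:
    """SHA-1 of the string rendering of any values, as dfHash does with df.values."""
    return hashlib.sha1(str(values).encode()).hexdigest()


digest = hash_values([[1, "a"], [2, "b"]])
print(digest)  # 40-character hex digest, stable across runs for equal values
```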
"},{"location":"api/#network_wrangler.utils.df_accessors.dfHash.__call__","title":"__call__()","text":"

Function to hash the dataframe.

Source code in network_wrangler/utils/df_accessors.py
def __call__(self):\n    \"\"\"Function to hash the dataframe.\"\"\"\n    _value = str(self._obj.values).encode()\n    hash = hashlib.sha1(_value).hexdigest()\n    return hash\n
"},{"location":"api/#network_wrangler.utils.df_accessors.dfHash.__init__","title":"__init__(pandas_obj)","text":"

Initialization function for the dataframe hash.

Source code in network_wrangler/utils/df_accessors.py
def __init__(self, pandas_obj):\n    \"\"\"Initialization function for the dataframe hash.\"\"\"\n    self._obj = pandas_obj\n
"},{"location":"api/#network_wrangler.logger.setup_logging","title":"setup_logging(info_log_filename=None, debug_log_filename='wrangler_{}.debug.log'.format(datetime.now().strftime('%Y_%m_%d__%H_%M_%S')), std_out_level='info')","text":"

Sets up the WranglerLogger w.r.t. the debug file location and if logging to console.

Called by the test_logging fixture in conftest.py and can be called by the user to set up logging for their session. If called multiple times, the logger will be reset.

Parameters:

Name Type Description Default info_log_filename str

the location of the log file that will get created to add the INFO log. The INFO Log is terse, just gives the bare minimum of details. Defaults to file in cwd() wrangler_[datetime].log. To turn off logging to a file, use log_filename = None.

None debug_log_filename str

the location of the log file that will get created to add the DEBUG log. The DEBUG log is very noisy, for debugging. Defaults to file in cwd() wrangler_[datetime].log. To turn off logging to a file, use log_filename = None.

format(strftime('%Y_%m_%d__%H_%M_%S')) std_out_level str

the level of logging to the console. One of \u201cinfo\u201d, \u201cwarning\u201d, \u201cdebug\u201d. Defaults to \u201cinfo\u201d but will be set to ERROR if nothing provided matches.

'info' Source code in network_wrangler/logger.py
def setup_logging(\n    info_log_filename: str = None,\n    debug_log_filename: str = \"wrangler_{}.debug.log\".format(\n        datetime.now().strftime(\"%Y_%m_%d__%H_%M_%S\")\n    ),\n    std_out_level: str = \"info\",\n):\n    \"\"\"Sets up the WranglerLogger w.r.t. the debug file location and if logging to console.\n\n    Called by the test_logging fixture in conftest.py and can be called by the user to setup\n    logging for their session. If called multiple times, the logger will be reset.\n\n    Args:\n        info_log_filename: the location of the log file that will get created to add the INFO log.\n            The INFO Log is terse, just gives the bare minimum of details.\n            Defaults to file in cwd() `wrangler_[datetime].log`. To turn off logging to a file,\n            use log_filename = None.\n        debug_log_filename: the location of the log file that will get created to add the DEBUG log\n            The DEBUG log is very noisy, for debugging. Defaults to file in cwd()\n            `wrangler_[datetime].log`. To turn off logging to a file, use log_filename = None.\n        std_out_level: the level of logging to the console. 
One of \"info\", \"warning\", \"debug\".\n            Defaults to \"info\" but will be set to ERROR if nothing provided matches.\n    \"\"\"\n    # add function variable so that we know if logging has been called\n    setup_logging.called = True\n\n    # Clear handlers if any exist already\n    WranglerLogger.handlers = []\n\n    WranglerLogger.setLevel(logging.DEBUG)\n\n    FORMAT = logging.Formatter(\n        \"%(asctime)-15s %(levelname)s: %(message)s\", datefmt=\"%Y-%m-%d %H:%M:%S,\"\n    )\n    if not info_log_filename:\n        info_log_filename = os.path.join(\n            os.getcwd(),\n            \"network_wrangler_{}.info.log\".format(datetime.now().strftime(\"%Y_%m_%d__%H_%M_%S\")),\n        )\n\n    info_file_handler = logging.StreamHandler(open(info_log_filename, \"w\"))\n    info_file_handler.setLevel(logging.INFO)\n    info_file_handler.setFormatter(FORMAT)\n    WranglerLogger.addHandler(info_file_handler)\n\n    # create debug file only when debug_log_filename is provided\n    if debug_log_filename:\n        debug_log_handler = logging.StreamHandler(open(debug_log_filename, \"w\"))\n        debug_log_handler.setLevel(logging.DEBUG)\n        debug_log_handler.setFormatter(FORMAT)\n        WranglerLogger.addHandler(debug_log_handler)\n\n    console_handler = logging.StreamHandler(sys.stdout)\n    console_handler.setLevel(logging.DEBUG)\n    console_handler.setFormatter(FORMAT)\n    WranglerLogger.addHandler(console_handler)\n    if std_out_level == \"debug\":\n        console_handler.setLevel(logging.DEBUG)\n    elif std_out_level == \"info\":\n        console_handler.setLevel(logging.INFO)\n    elif std_out_level == \"warning\":\n        console_handler.setLevel(logging.WARNING)\n    else:\n        console_handler.setLevel(logging.ERROR)\n
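The pattern setup_logging uses (one logger set to DEBUG, with each handler filtering to its own threshold) can be reproduced in a few standard-library lines; the logger name below is illustrative:

```python
import logging
import sys

demo_logger = logging.getLogger("wrangler_demo")
demo_logger.setLevel(logging.DEBUG)  # the logger itself passes everything through ...

console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.INFO)  # ... and each handler filters to its own level
console.setFormatter(logging.Formatter("%(asctime)-15s %(levelname)s: %(message)s"))
demo_logger.addHandler(console)

demo_logger.info("shown: at or above the handler level")
demo_logger.debug("hidden: below the handler level")
```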
"},{"location":"data_models/","title":"Data Models","text":""},{"location":"data_models/#roadway","title":"Roadway","text":""},{"location":"data_models/#tables","title":"Tables","text":"

Datamodels for Roadway Network Tables.

This module contains the datamodels used to validate the format and types of Roadway Network tables.

Includes: - RoadLinksTable - RoadNodesTable - RoadShapesTable - ExplodedScopedLinkPropertyTable

"},{"location":"data_models/#network_wrangler.models.roadway.tables.ExplodedScopedLinkPropertyTable","title":"ExplodedScopedLinkPropertyTable","text":"

Bases: DataFrameModel

Datamodel used to validate an exploded links_df by scope.

Source code in network_wrangler/models/roadway/tables.py
class ExplodedScopedLinkPropertyTable(DataFrameModel):\n    \"\"\"Datamodel used to validate an exploded links_df by scope.\"\"\"\n\n    model_link_id: Series[int]\n    category: Series[Any]\n    timespan: Series[list[str]]\n    start_time: Series[dt.datetime]\n    end_time: Series[dt.datetime]\n    scoped: Series[Any] = pa.Field(default=None, nullable=True)\n\n    class Config:\n        \"\"\"Config for ExplodedScopedLinkPropertySchema.\"\"\"\n\n        name = \"ExplodedScopedLinkPropertySchema\"\n        coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.ExplodedScopedLinkPropertyTable.Config","title":"Config","text":"

Config for ExplodedScopedLinkPropertySchema.

Source code in network_wrangler/models/roadway/tables.py
class Config:\n    \"\"\"Config for ExplodedScopedLinkPropertySchema.\"\"\"\n\n    name = \"ExplodedScopedLinkPropertySchema\"\n    coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadLinksTable","title":"RoadLinksTable","text":"

Bases: DataFrameModel

Datamodel used to validate if links_df is of correct format and types.

Source code in network_wrangler/models/roadway/tables.py
class RoadLinksTable(DataFrameModel):\n    \"\"\"Datamodel used to validate if links_df is of correct format and types.\"\"\"\n\n    model_link_id: Series[int] = pa.Field(coerce=True, unique=True)\n    model_link_id_idx: Optional[Series[int]] = pa.Field(coerce=True, unique=True)\n    A: Series[int] = pa.Field(nullable=False, coerce=True)\n    B: Series[int] = pa.Field(nullable=False, coerce=True)\n    geometry: GeoSeries = pa.Field(nullable=False)\n    name: Series[str] = pa.Field(nullable=False)\n    rail_only: Series[bool] = pa.Field(coerce=True, nullable=False, default=False)\n    bus_only: Series[bool] = pa.Field(coerce=True, nullable=False, default=False)\n    drive_access: Series[bool] = pa.Field(coerce=True, nullable=False, default=True)\n    bike_access: Series[bool] = pa.Field(coerce=True, nullable=False, default=True)\n    walk_access: Series[bool] = pa.Field(coerce=True, nullable=False, default=True)\n    distance: Series[float] = pa.Field(coerce=True, nullable=False)\n\n    roadway: Series[str] = pa.Field(nullable=False, default=\"road\")\n    managed: Series[int] = pa.Field(coerce=True, nullable=False, default=0)\n\n    shape_id: Series[str] = pa.Field(coerce=True, nullable=True)\n    lanes: Series[Any] = pa.Field(coerce=True, nullable=True, default=0)\n    price: Series[float] = pa.Field(coerce=True, nullable=False, default=0)\n\n    # Optional Fields\n    access: Optional[Series[Any]] = pa.Field(coerce=True, nullable=True, default=None)\n\n    sc_lanes: Optional[Series[object]] = pa.Field(coerce=True, nullable=True, default=None)\n    sc_price: Optional[Series[object]] = pa.Field(coerce=True, nullable=True, default=None)\n\n    ML_lanes: Optional[Series[Int64]] = pa.Field(coerce=True, nullable=True, default=None)\n    ML_price: Optional[Series[float]] = pa.Field(coerce=True, nullable=True, default=0)\n    ML_access: Optional[Series[Any]] = pa.Field(coerce=True, nullable=True, default=True)\n    ML_access_point: Optional[Series[bool]] = pa.Field(\n    
    coerce=True,\n        default=False,\n    )\n    ML_egress_point: Optional[Series[bool]] = pa.Field(\n        coerce=True,\n        default=False,\n    )\n    sc_ML_lanes: Optional[Series[object]] = pa.Field(\n        coerce=True,\n        nullable=True,\n        default=None,\n    )\n    sc_ML_price: Optional[Series[object]] = pa.Field(\n        coerce=True,\n        nullable=True,\n        default=None,\n    )\n    sc_ML_access: Optional[Series[object]] = pa.Field(\n        coerce=True,\n        nullable=True,\n        default=None,\n    )\n\n    ML_geometry: Optional[GeoSeries] = pa.Field(nullable=True, coerce=True, default=None)\n    ML_shape_id: Optional[Series[str]] = pa.Field(nullable=True, coerce=True, default=None)\n\n    truck_access: Optional[Series[bool]] = pa.Field(coerce=True, nullable=True, default=True)\n    osm_link_id: Series[str] = pa.Field(coerce=True, nullable=True, default=\"\")\n    # todo this should be List[dict] but ranch output something else so had to have it be Any.\n    locationReferences: Optional[Series[Any]] = pa.Field(\n        coerce=True,\n        nullable=True,\n        default=\"\",\n    )\n\n    GP_A: Optional[Series[Int64]] = pa.Field(coerce=True, nullable=True, default=None)\n    GP_B: Optional[Series[Int64]] = pa.Field(coerce=True, nullable=True, default=None)\n\n    class Config:\n        \"\"\"Config for RoadLinksTable.\"\"\"\n\n        name = \"RoadLinksTable\"\n        add_missing_columns = True\n        coerce = True\n\n    # @pa.dataframe_check\n    # def unique_ab(cls, df: pd.DataFrame) -> bool:\n    #     \"\"\"Check that combination of A and B are unique.\"\"\"\n    #     return ~df[[\"A\", \"B\"]].duplicated()\n\n    # TODO add check that if there is managed>1 anywhere, that ML_ columns are present.\n\n    @pa.dataframe_check\n    def check_scoped_fields(cls, df: pd.DataFrame) -> Series[bool]:\n        \"\"\"Checks that all fields starting with 'sc_' or 'sc_ML_' are valid ScopedLinkValueList.\n\n        Custom 
check to validate fields starting with 'sc_' or 'sc_ML_'\n        against a ScopedLinkValueItem model, handling both mandatory and optional fields.\n        \"\"\"\n        scoped_fields = [\n            col for col in df.columns if col.startswith(\"sc_\") or col.startswith(\"sc_ML\")\n        ]\n        results = []\n\n        for field in scoped_fields:\n            if df[field].notna().any():\n                results.append(\n                    df[field].dropna().apply(validate_pyd, args=(ScopedLinkValueList,)).all()\n                )\n            else:\n                # Handling optional fields: Assume validation is true if the field is entirely NA\n                results.append(True)\n\n        # Combine all results: True if all fields pass validation\n        return pd.Series(all(results), index=df.index)\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadLinksTable.Config","title":"Config","text":"

Config for RoadLinksTable.

Source code in network_wrangler/models/roadway/tables.py
class Config:\n    \"\"\"Config for RoadLinksTable.\"\"\"\n\n    name = \"RoadLinksTable\"\n    add_missing_columns = True\n    coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadLinksTable.check_scoped_fields","title":"check_scoped_fields(df)","text":"

Checks that all fields starting with \u2018sc_\u2019 or \u2018sc_ML_\u2019 are valid ScopedLinkValueList.

Custom check to validate fields starting with \u2018sc_\u2019 or \u2018sc_ML_\u2019 against a ScopedLinkValueItem model, handling both mandatory and optional fields.

Source code in network_wrangler/models/roadway/tables.py
@pa.dataframe_check\ndef check_scoped_fields(cls, df: pd.DataFrame) -> Series[bool]:\n    \"\"\"Checks that all fields starting with 'sc_' or 'sc_ML_' are valid ScopedLinkValueList.\n\n    Custom check to validate fields starting with 'sc_' or 'sc_ML_'\n    against a ScopedLinkValueItem model, handling both mandatory and optional fields.\n    \"\"\"\n    scoped_fields = [\n        col for col in df.columns if col.startswith(\"sc_\") or col.startswith(\"sc_ML\")\n    ]\n    results = []\n\n    for field in scoped_fields:\n        if df[field].notna().any():\n            results.append(\n                df[field].dropna().apply(validate_pyd, args=(ScopedLinkValueList,)).all()\n            )\n        else:\n            # Handling optional fields: Assume validation is true if the field is entirely NA\n            results.append(True)\n\n    # Combine all results: True if all fields pass validation\n    return pd.Series(all(results), index=df.index)\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadNodesTable","title":"RoadNodesTable","text":"

Bases: DataFrameModel

Datamodel used to validate if nodes_df is of correct format and types.

Source code in network_wrangler/models/roadway/tables.py
class RoadNodesTable(DataFrameModel):\n    \"\"\"Datamodel used to validate if nodes_df is of correct format and types.\"\"\"\n\n    model_node_id: Series[int] = pa.Field(coerce=True, unique=True, nullable=False)\n    model_node_idx: Optional[Series[int]] = pa.Field(coerce=True, unique=True, nullable=False)\n    geometry: GeoSeries\n    X: Series[float] = pa.Field(coerce=True, nullable=False)\n    Y: Series[float] = pa.Field(coerce=True, nullable=False)\n\n    # optional fields\n    osm_node_id: Series[str] = pa.Field(\n        coerce=True,\n        nullable=True,\n        default=\"\",\n    )\n\n    inboundReferenceIds: Optional[Series[list[str]]] = pa.Field(coerce=True, nullable=True)\n    outboundReferenceIds: Optional[Series[list[str]]] = pa.Field(coerce=True, nullable=True)\n\n    class Config:\n        \"\"\"Config for RoadNodesTable.\"\"\"\n\n        name = \"RoadNodesTable\"\n        add_missing_columns = True\n        coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadNodesTable.Config","title":"Config","text":"

Config for RoadNodesTable.

Source code in network_wrangler/models/roadway/tables.py
class Config:\n    \"\"\"Config for RoadNodesTable.\"\"\"\n\n    name = \"RoadNodesTable\"\n    add_missing_columns = True\n    coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadShapesTable","title":"RoadShapesTable","text":"

Bases: DataFrameModel

Datamodel used to validate if shapes_df is of correct format and types.

Source code in network_wrangler/models/roadway/tables.py
class RoadShapesTable(DataFrameModel):\n    \"\"\"Datamodel used to validate if shapes_df is of correct format and types.\"\"\"\n\n    shape_id: Series[str] = pa.Field(unique=False)\n    shape_id_idx: Optional[Series[int]] = pa.Field(unique=False)\n\n    geometry: GeoSeries = pa.Field()\n    ref_shape_id: Optional[Series] = pa.Field(nullable=True)\n\n    class Config:\n        \"\"\"Config for RoadShapesTable.\"\"\"\n\n        name = \"ShapesSchema\"\n        coerce = True\n
"},{"location":"data_models/#network_wrangler.models.roadway.tables.RoadShapesTable.Config","title":"Config","text":"

Config for RoadShapesTable.

Source code in network_wrangler/models/roadway/tables.py
class Config:\n    \"\"\"Config for RoadShapesTable.\"\"\"\n\n    name = \"ShapesSchema\"\n    coerce = True\n
"},{"location":"data_models/#types","title":"Types","text":"

Complex roadway types defined using Pydantic models to facilitate validation.

"},{"location":"data_models/#network_wrangler.models.roadway.types.LocationReferences","title":"LocationReferences = conlist(LocationReference, min_length=2) module-attribute","text":"

List of at least two LocationReferences which define a path.

"},{"location":"data_models/#network_wrangler.models.roadway.types.LocationReference","title":"LocationReference","text":"

Bases: BaseModel

SharedStreets-defined object for location reference.

Source code in network_wrangler/models/roadway/types.py
class LocationReference(BaseModel):\n    \"\"\"SharedStreets-defined object for location reference.\"\"\"\n\n    sequence: PositiveInt\n    point: LatLongCoordinates\n    bearing: float = Field(None, ge=-360, le=360)\n    distanceToNextRef: NonNegativeFloat\n    intersectionId: str\n
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopeLinkValueError","title":"ScopeLinkValueError","text":"

Bases: Exception

Raised when there is an issue with ScopedLinkValueList.

Source code in network_wrangler/models/roadway/types.py
class ScopeLinkValueError(Exception):\n    \"\"\"Raised when there is an issue with ScopedLinkValueList.\"\"\"\n\n    pass\n
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopedLinkValueItem","title":"ScopedLinkValueItem","text":"

Bases: RecordModel

Define a link property scoped by timespan or category.

Source code in network_wrangler/models/roadway/types.py
class ScopedLinkValueItem(RecordModel):\n    \"\"\"Define a link property scoped by timespan or category.\"\"\"\n\n    require_any_of = [\"category\", \"timespan\"]\n    model_config = ConfigDict(extra=\"forbid\")\n    category: Optional[Union[str, int]] = Field(default=DEFAULT_CATEGORY)\n    timespan: Optional[list[TimeString]] = Field(default=DEFAULT_TIMESPAN)\n    value: Union[int, float, str]\n\n    @property\n    def timespan_dt(self) -> list[list[datetime]]:\n        \"\"\"Convert timespan to list of datetime objects.\"\"\"\n        return str_to_time_list(self.timespan)\n
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopedLinkValueItem.timespan_dt","title":"timespan_dt: list[list[datetime]] property","text":"

Convert timespan to list of datetime objects.

"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopedLinkValueList","title":"ScopedLinkValueList","text":"

Bases: RootListMixin, RootModel

List of non-conflicting ScopedLinkValueItems.

Source code in network_wrangler/models/roadway/types.py
class ScopedLinkValueList(RootListMixin, RootModel):\n    \"\"\"List of non-conflicting ScopedLinkValueItems.\"\"\"\n\n    root: list[ScopedLinkValueItem]\n\n    def overlapping_timespans(self, timespan: Timespan):\n        \"\"\"Identify overlapping timespans in the list.\"\"\"\n        timespan_dt = str_to_time_list(timespan)\n        return [i for i in self if dt_overlaps(i.timespan_dt, timespan_dt)]\n\n    @model_validator(mode=\"after\")\n    def check_conflicting_scopes(self):\n        \"\"\"Check for conflicting scopes in the list.\"\"\"\n        conflicts = []\n        for i in self:\n            if i.timespan == DEFAULT_TIMESPAN:\n                continue\n            overlapping_ts_i = self.overlapping_timespans(i.timespan)\n            for j in overlapping_ts_i:\n                if j == i:\n                    continue\n                if j.category == i.category:\n                    conflicts.append((i, j))\n        if conflicts:\n            WranglerLogger.error(f\"Found conflicting scopes in ScopedPropertySetList:\\n{conflicts}\")\n            raise ScopeLinkValueError(\"Conflicting scopes in ScopedPropertySetList\")\n\n        return self\n
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopedLinkValueList.check_conflicting_scopes","title":"check_conflicting_scopes()","text":"

Check for conflicting scopes in the list.

Source code in network_wrangler/models/roadway/types.py
@model_validator(mode=\"after\")\ndef check_conflicting_scopes(self):\n    \"\"\"Check for conflicting scopes in the list.\"\"\"\n    conflicts = []\n    for i in self:\n        if i.timespan == DEFAULT_TIMESPAN:\n            continue\n        overlapping_ts_i = self.overlapping_timespans(i.timespan)\n        for j in overlapping_ts_i:\n            if j == i:\n                continue\n            if j.category == i.category:\n                conflicts.append((i, j))\n    if conflicts:\n        WranglerLogger.error(f\"Found conflicting scopes in ScopedPropertySetList:\\n{conflicts}\")\n        raise ScopeLinkValueError(\"Conflicting scopes in ScopedPropertySetList\")\n\n    return self\n
"},{"location":"data_models/#network_wrangler.models.roadway.types.ScopedLinkValueList.overlapping_timespans","title":"overlapping_timespans(timespan)","text":"

Identify overlapping timespans in the list.

Source code in network_wrangler/models/roadway/types.py
def overlapping_timespans(self, timespan: Timespan):\n    \"\"\"Identify overlapping timespans in the list.\"\"\"\n    timespan_dt = str_to_time_list(timespan)\n    return [i for i in self if dt_overlaps(i.timespan_dt, timespan_dt)]\n
"},{"location":"data_models/#transit","title":"Transit","text":"

Main functionality for GTFS tables including Feed object.

"},{"location":"data_models/#network_wrangler.transit.feed.feed.Feed","title":"Feed","text":"

Bases: DBModelMixin

Wrapper class around a Wrangler-flavored GTFS feed.

Most functionality derives from mixin class DBModelMixin which provides: - validation of tables to schemas when setting a table attribute (e.g. self.trips = trips_df) - validation of fks when setting a table attribute (e.g. self.trips = trips_df) - hashing and deep copy functionality - overload of __eq__ to apply only to tables in table_names. - convenience methods for accessing tables

Attributes:

Name Type Description table_names

list of table names in GTFS feed.

tables

list of tables as dataframes.

stop_times

stop_times dataframe with roadway node_ids

stops

stops dataframe

shapes

shapes dataframe

trips

trips dataframe

frequencies

frequencies dataframe

routes

route dataframe

net

TransitNetwork object

Source code in network_wrangler/transit/feed/feed.py
class Feed(DBModelMixin):\n    \"\"\"Wrapper class around Wrangler flavored GTFS feed.\n\n    Most functionality derives from mixin class DBModelMixin which provides:\n    - validation of tables to schemas when setting a table attribute (e.g. self.trips = trips_df)\n    - validation of fks when setting a table attribute (e.g. self.trips = trips_df)\n    - hashing and deep copy functionality\n    - overload of __eq__ to apply only to tables in table_names.\n    - convenience methods for accessing tables\n\n    Attributes:\n        table_names: list of table names in GTFS feed.\n        tables: list tables as dataframes.\n        stop_times: stop_times dataframe with roadway node_ids\n        stops: stops dataframe\n        shapes: shapes dataframe\n        trips: trips dataframe\n        frequencies: frequencies dataframe\n        routes: route dataframe\n        net: TransitNetwork object\n    \"\"\"\n\n    # the ordering here matters because the stops need to be added before stop_times if\n    # stop times needs to be converted\n    _table_models = {\n        \"agencies\": AgenciesTable,\n        \"frequencies\": FrequenciesTable,\n        \"routes\": RoutesTable,\n        \"shapes\": WranglerShapesTable,\n        \"stops\": WranglerStopsTable,\n        \"trips\": TripsTable,\n        \"stop_times\": WranglerStopTimesTable,\n    }\n\n    _converters = {\"stop_times\": gtfs_to_wrangler_stop_times}\n\n    table_names = [\n        \"frequencies\",\n        \"routes\",\n        \"shapes\",\n        \"stops\",\n        \"trips\",\n        \"stop_times\",\n    ]\n\n    optional_table_names = [\"agencies\"]\n\n    def __init__(self, **kwargs):\n        \"\"\"Create a Feed object from a dictionary of DataFrames representing a GTFS feed.\n\n        Args:\n            kwargs: A dictionary containing DataFrames representing the tables of a GTFS feed.\n        \"\"\"\n        self._net = None\n        self.feed_path: Path = None\n        self.initialize_tables(**kwargs)\n\n   
     # Set extra provided attributes but just FYI in logger.\n        extra_attr = {k: v for k, v in kwargs.items() if k not in self.table_names}\n        if extra_attr:\n            WranglerLogger.info(f\"Adding additional attributes to Feed: {extra_attr.keys()}\")\n        for k, v in extra_attr.items():\n            self.__setattr__(k, v)\n\n    def set_by_id(\n        self,\n        table_name: str,\n        set_df: pd.DataFrame,\n        id_property: str = \"trip_id\",\n        properties: list[str] = None,\n    ):\n        \"\"\"Set property values in a specific table for a list of IDs.\n\n        Args:\n            table_name (str): Name of the table to modify.\n            set_df (pd.DataFrame): DataFrame with columns 'trip_id' and 'value' containing\n                trip IDs and values to set for the specified property.\n            id_property: Property to use as ID to set by. Defaults to \"trip_id\".\n            properties: List of properties to set which are in set_df. If not specified, will set\n                all properties.\n        \"\"\"\n        table_df = self.get_table(table_name)\n        updated_df = update_df_by_col_value(table_df, set_df, id_property, properties=properties)\n        self.__dict__[table_name] = updated_df\n
"},{"location":"data_models/#network_wrangler.transit.feed.feed.Feed.__init__","title":"__init__(**kwargs)","text":"

Create a Feed object from a dictionary of DataFrames representing a GTFS feed.

Parameters:

Name Type Description Default kwargs

A dictionary containing DataFrames representing the tables of a GTFS feed.

{} Source code in network_wrangler/transit/feed/feed.py
def __init__(self, **kwargs):\n    \"\"\"Create a Feed object from a dictionary of DataFrames representing a GTFS feed.\n\n    Args:\n        kwargs: A dictionary containing DataFrames representing the tables of a GTFS feed.\n    \"\"\"\n    self._net = None\n    self.feed_path: Path = None\n    self.initialize_tables(**kwargs)\n\n    # Set extra provided attributes but just FYI in logger.\n    extra_attr = {k: v for k, v in kwargs.items() if k not in self.table_names}\n    if extra_attr:\n        WranglerLogger.info(f\"Adding additional attributes to Feed: {extra_attr.keys()}\")\n    for k, v in extra_attr.items():\n        self.__setattr__(k, v)\n
"},{"location":"data_models/#network_wrangler.transit.feed.feed.Feed.set_by_id","title":"set_by_id(table_name, set_df, id_property='trip_id', properties=None)","text":"

Set property values in a specific table for a list of IDs.

Parameters:

Name Type Description Default table_name str

Name of the table to modify.

required set_df DataFrame

DataFrame with columns \u2018trip_id\u2019 and \u2018value\u2019 containing trip IDs and values to set for the specified property.

required id_property str

Property to use as ID to set by. Defaults to \u201ctrip_id\u201d.

'trip_id' properties list[str]

List of properties to set which are in set_df. If not specified, will set all properties.

None Source code in network_wrangler/transit/feed/feed.py
def set_by_id(\n    self,\n    table_name: str,\n    set_df: pd.DataFrame,\n    id_property: str = \"trip_id\",\n    properties: list[str] = None,\n):\n    \"\"\"Set property values in a specific table for a list of IDs.\n\n    Args:\n        table_name (str): Name of the table to modify.\n        set_df (pd.DataFrame): DataFrame with columns 'trip_id' and 'value' containing\n            trip IDs and values to set for the specified property.\n        id_property: Property to use as ID to set by. Defaults to \"trip_id\".\n        properties: List of properties to set which are in set_df. If not specified, will set\n            all properties.\n    \"\"\"\n    table_df = self.get_table(table_name)\n    updated_df = update_df_by_col_value(table_df, set_df, id_property, properties=properties)\n    self.__dict__[table_name] = updated_df\n
"},{"location":"data_models/#network_wrangler.transit.feed.feed.FeedValidationError","title":"FeedValidationError","text":"

Bases: Exception

Raised when there is an issue with the validation of the GTFS data.

Source code in network_wrangler/transit/feed/feed.py
class FeedValidationError(Exception):\n    \"\"\"Raised when there is an issue with the validation of the GTFS data.\"\"\"\n\n    pass\n
"},{"location":"data_models/#network_wrangler.transit.feed.feed.merge_shapes_to_stop_times","title":"merge_shapes_to_stop_times(stop_times, shapes, trips)","text":"

Add shape_id and shape_pt_sequence to stop_times dataframe.

Parameters:

Name Type Description Default stop_times DataFrame[WranglerStopTimesTable]

stop_times dataframe to add shape_id and shape_pt_sequence to.

required shapes DataFrame[WranglerShapesTable]

shapes dataframe to add to stop_times.

required trips DataFrame[TripsTable]

trips dataframe to link stop_times to shapes.

required

Returns:

Type Description DataFrame[WranglerStopTimesTable]

stop_times dataframe with shape_id and shape_pt_sequence added.

Source code in network_wrangler/transit/feed/feed.py
def merge_shapes_to_stop_times(\n    stop_times: DataFrame[WranglerStopTimesTable],\n    shapes: DataFrame[WranglerShapesTable],\n    trips: DataFrame[TripsTable],\n) -> DataFrame[WranglerStopTimesTable]:\n    \"\"\"Add shape_id and shape_pt_sequence to stop_times dataframe.\n\n    Args:\n        stop_times: stop_times dataframe to add shape_id and shape_pt_sequence to.\n        shapes: shapes dataframe to add to stop_times.\n        trips: trips dataframe to link stop_times to shapes\n\n    Returns:\n        stop_times dataframe with shape_id and shape_pt_sequence added.\n    \"\"\"\n    stop_times_w_shape_id = stop_times.merge(\n        trips[[\"trip_id\", \"shape_id\"]], on=\"trip_id\", how=\"left\"\n    )\n\n    stop_times_w_shapes = stop_times_w_shape_id.merge(\n        shapes,\n        how=\"left\",\n        left_on=[\"shape_id\", \"model_node_id\"],\n        right_on=[\"shape_id\", \"shape_model_node_id\"],\n    )\n    stop_times_w_shapes = stop_times_w_shapes.drop(columns=[\"shape_model_node_id\"])\n    return stop_times_w_shapes\n
"},{"location":"data_models/#network_wrangler.transit.feed.feed.stop_count_by_trip","title":"stop_count_by_trip(stop_times)","text":"

Returns dataframe with trip_id and stop_count from stop_times.

Source code in network_wrangler/transit/feed/feed.py
def stop_count_by_trip(\n    stop_times: DataFrame[WranglerStopTimesTable],\n) -> pd.DataFrame:\n    \"\"\"Returns dataframe with trip_id and stop_count from stop_times.\"\"\"\n    stops_count = stop_times.groupby(\"trip_id\").size()\n    return stops_count.reset_index(name=\"stop_count\")\n
"},{"location":"data_models/#pure-gtfs-tables","title":"Pure GTFS Tables","text":"

Models for when you want to use vanilla (non-Wrangler) GTFS.

"},{"location":"data_models/#network_wrangler.models.gtfs.AgencyRecord","title":"AgencyRecord","text":"

Bases: BaseModel

Represents a transit agency.

Source code in network_wrangler/models/gtfs/records.py
class AgencyRecord(BaseModel):\n    \"\"\"Represents a transit agency.\"\"\"\n\n    agency_id: AgencyID\n    agency_name: Optional[AgencyName]\n    agency_url: Optional[HttpUrl]\n    agency_timezone: Timezone\n    agency_lang: Optional[Language]\n    agency_phone: Optional[AgencyPhone]\n    agency_fare_url: Optional[AgencyFareUrl]\n    agency_email: Optional[AgencyEmail]\n
"},{"location":"data_models/#network_wrangler.models.gtfs.BikesAllowed","title":"BikesAllowed","text":"

Bases: IntEnum

Indicates whether bicycles are allowed.

Source code in network_wrangler/models/gtfs/types.py
class BikesAllowed(IntEnum):\n    \"\"\"Indicates whether bicycles are allowed.\"\"\"\n\n    NO_INFORMATION = 0\n    ALLOWED = 1\n    NOT_ALLOWED = 2\n
"},{"location":"data_models/#network_wrangler.models.gtfs.DirectionID","title":"DirectionID","text":"

Bases: IntEnum

Indicates the direction of travel for a trip.

Source code in network_wrangler/models/gtfs/types.py
class DirectionID(IntEnum):\n    \"\"\"Indicates the direction of travel for a trip.\"\"\"\n\n    OUTBOUND = 0\n    INBOUND = 1\n
"},{"location":"data_models/#network_wrangler.models.gtfs.FrequencyRecord","title":"FrequencyRecord","text":"

Bases: BaseModel

Represents headway (time between trips) for routes with variable frequency.

Source code in network_wrangler/models/gtfs/records.py
class FrequencyRecord(BaseModel):\n    \"\"\"Represents headway (time between trips) for routes with variable frequency.\"\"\"\n\n    trip_id: TripID\n    start_time: StartTime\n    end_time: EndTime\n    headway_secs: HeadwaySecs\n
"},{"location":"data_models/#network_wrangler.models.gtfs.LocationType","title":"LocationType","text":"

Bases: IntEnum

Indicates the type of node the stop record represents.

Full documentation: https://gtfs.org/schedule/reference/#stopstxt

Source code in network_wrangler/models/gtfs/types.py
class LocationType(IntEnum):\n    \"\"\"Indicates the type of node the stop record represents.\n\n    Full documentation: https://gtfs.org/schedule/reference/#stopstxt\n    \"\"\"\n\n    STOP_PLATFORM = 0\n    STATION = 1\n    ENTRANCE_EXIT = 2\n    GENERIC_NODE = 3\n    BOARDING_AREA = 4\n
"},{"location":"data_models/#network_wrangler.models.gtfs.MockPaModel","title":"MockPaModel","text":"

Mock model for when Pandera is not installed.

Source code in network_wrangler/models/gtfs/__init__.py
class MockPaModel:\n    \"\"\"Mock model for when Pandera is not installed.\"\"\"\n\n    def __init__(self, **kwargs):\n        \"\"\"Mock model initialization.\"\"\"\n        for key, value in kwargs.items():\n            setattr(self, key, value)\n
"},{"location":"data_models/#network_wrangler.models.gtfs.MockPaModel.__init__","title":"__init__(**kwargs)","text":"

Mock model initialization.

Source code in network_wrangler/models/gtfs/__init__.py
def __init__(self, **kwargs):\n    \"\"\"Mock model initialization.\"\"\"\n    for key, value in kwargs.items():\n        setattr(self, key, value)\n
"},{"location":"data_models/#network_wrangler.models.gtfs.PickupDropoffType","title":"PickupDropoffType","text":"

Bases: IntEnum

Indicates the pickup method for passengers at a stop.

Full documentation: https://gtfs.org/schedule/reference

Source code in network_wrangler/models/gtfs/types.py
class PickupDropoffType(IntEnum):\n    \"\"\"Indicates the pickup method for passengers at a stop.\n\n    Full documentation: https://gtfs.org/schedule/reference\n    \"\"\"\n\n    REGULAR = 0\n    NONE = 1\n    PHONE_AGENCY = 2\n    COORDINATE_WITH_DRIVER = 3\n
"},{"location":"data_models/#network_wrangler.models.gtfs.RouteRecord","title":"RouteRecord","text":"

Bases: BaseModel

Represents a transit route.

Source code in network_wrangler/models/gtfs/records.py
class RouteRecord(BaseModel):\n    \"\"\"Represents a transit route.\"\"\"\n\n    route_id: RouteID\n    agency_id: AgencyID\n    route_type: RouteType\n    route_short_name: RouteShortName\n    route_long_name: RouteLongName\n\n    # Optional\n    route_desc: Optional[RouteDesc]\n    route_url: Optional[RouteUrl]\n    route_color: Optional[RouteColor]\n    route_text_color: Optional[RouteTextColor]\n
"},{"location":"data_models/#network_wrangler.models.gtfs.RouteType","title":"RouteType","text":"

Bases: IntEnum

Indicates the type of transportation used on a route.

Full documentation: https://gtfs.org/schedule/reference

Source code in network_wrangler/models/gtfs/types.py
class RouteType(IntEnum):\n    \"\"\"Indicates the type of transportation used on a route.\n\n    Full documentation: https://gtfs.org/schedule/reference\n    \"\"\"\n\n    TRAM = 0\n    SUBWAY = 1\n    RAIL = 2\n    BUS = 3\n    FERRY = 4\n    CABLE_TRAM = 5\n    AERIAL_LIFT = 6\n    FUNICULAR = 7\n    TROLLEYBUS = 11\n    MONORAIL = 12\n
"},{"location":"data_models/#network_wrangler.models.gtfs.ShapeRecord","title":"ShapeRecord","text":"

Bases: BaseModel

Represents a point on a path (shape) that a transit vehicle takes.

Source code in network_wrangler/models/gtfs/records.py
class ShapeRecord(BaseModel):\n    \"\"\"Represents a point on a path (shape) that a transit vehicle takes.\"\"\"\n\n    shape_id: ShapeID\n    shape_pt_lat: Latitude\n    shape_pt_lon: Longitude\n    shape_pt_sequence: ShapePtSequence\n\n    # Optional\n    shape_dist_traveled: Optional[ShapeDistTraveled]\n
"},{"location":"data_models/#network_wrangler.models.gtfs.StopRecord","title":"StopRecord","text":"

Bases: BaseModel

Represents a stop or station where vehicles pick up or drop off passengers.

Source code in network_wrangler/models/gtfs/records.py
class StopRecord(BaseModel):\n    \"\"\"Represents a stop or station where vehicles pick up or drop off passengers.\"\"\"\n\n    stop_id: StopID\n    stop_lat: Latitude\n    stop_lon: Longitude\n\n    # Optional\n    stop_code: Optional[StopCode]\n    stop_name: Optional[StopName]\n    tts_stop_name: Optional[TTSStopName]\n    stop_desc: Optional[StopDesc]\n    zone_id: Optional[ZoneID]\n    stop_url: Optional[StopUrl]\n    location_type: Optional[LocationType]\n    parent_station: Optional[ParentStation]\n    stop_timezone: Optional[Timezone]\n    wheelchair_boarding: Optional[WheelchairAccessible]\n
"},{"location":"data_models/#network_wrangler.models.gtfs.StopTimeRecord","title":"StopTimeRecord","text":"

Bases: BaseModel

Times that a vehicle arrives at and departs from stops for each trip.

Source code in network_wrangler/models/gtfs/records.py
class StopTimeRecord(BaseModel):\n    \"\"\"Times that a vehicle arrives at and departs from stops for each trip.\"\"\"\n\n    trip_id: TripID\n    arrival_time: ArrivalTime\n    departure_time: DepartureTime\n    stop_id: StopID\n    stop_sequence: StopSequence\n\n    # Optional\n    stop_headsign: Optional[StopHeadsign]\n    pickup_type: Optional[PickupType]\n    drop_off_type: Optional[DropoffType]\n    shape_dist_traveled: Optional[ShapeDistTraveled]\n    timepoint: Optional[Timepoint]\n
"},{"location":"data_models/#network_wrangler.models.gtfs.TimepointType","title":"TimepointType","text":"

Bases: IntEnum

Indicates whether the specified time is exact or approximate.

Full documentation: https://gtfs.org/schedule/reference

Source code in network_wrangler/models/gtfs/types.py
class TimepointType(IntEnum):\n    \"\"\"Indicates whether the specified time is exact or approximate.\n\n    Full documentation: https://gtfs.org/schedule/reference\n    \"\"\"\n\n    APPROXIMATE = 0\n    EXACT = 1\n
"},{"location":"data_models/#network_wrangler.models.gtfs.TripRecord","title":"TripRecord","text":"

Bases: BaseModel

Describes trips which are sequences of two or more stops that occur at specific times.

Source code in network_wrangler/models/gtfs/records.py
class TripRecord(BaseModel):\n    \"\"\"Describes trips which are sequences of two or more stops that occur at specific times.\"\"\"\n\n    route_id: RouteID\n    service_id: ServiceID\n    trip_id: TripID\n    trip_headsign: TripHeadsign\n    trip_short_name: TripShortName\n    direction_id: DirectionID\n    block_id: BlockID\n    shape_id: ShapeID\n    wheelchair_accessible: WheelchairAccessible\n    bikes_allowed: BikesAllowed\n
"},{"location":"data_models/#network_wrangler.models.gtfs.WheelchairAccessible","title":"WheelchairAccessible","text":"

Bases: IntEnum

Indicates whether the trip is wheelchair accessible.

Full documentation: https://gtfs.org/schedule/reference

Source code in network_wrangler/models/gtfs/types.py
class WheelchairAccessible(IntEnum):\n    \"\"\"Indicates whether the trip is wheelchair accessible.\n\n    Full documentation: https://gtfs.org/schedule/reference\n    \"\"\"\n\n    NO_INFORMATION = 0\n    POSSIBLE = 1\n    NOT_POSSIBLE = 2\n
"},{"location":"data_models/#network_wrangler.models.gtfs.WranglerShapeRecord","title":"WranglerShapeRecord","text":"

Bases: ShapeRecord

Wrangler-flavored ShapeRecord.

Source code in network_wrangler/models/gtfs/records.py
class WranglerShapeRecord(ShapeRecord):\n    \"\"\"Wrangler-flavored ShapeRecord.\"\"\"\n\n    shape_model_node_id: int\n
"},{"location":"data_models/#network_wrangler.models.gtfs.WranglerStopRecord","title":"WranglerStopRecord","text":"

Bases: StopRecord

Wrangler-flavored StopRecord.

Source code in network_wrangler/models/gtfs/records.py
class WranglerStopRecord(StopRecord):\n    \"\"\"Wrangler-flavored StopRecord.\"\"\"\n\n    trip_id: TripID\n
"},{"location":"data_models/#network_wrangler.models.gtfs.WranglerStopTimeRecord","title":"WranglerStopTimeRecord","text":"

Bases: StopTimeRecord

Wrangler-flavored StopTimeRecord.

Source code in network_wrangler/models/gtfs/records.py
class WranglerStopTimeRecord(StopTimeRecord):\n    \"\"\"Wrangler-flavored StopTimeRecord.\"\"\"\n\n    model_node_id: int\n\n    model_config = ConfigDict(\n        protected_namespaces=(),\n    )\n
"},{"location":"data_models/#project-cards","title":"Project Cards","text":""},{"location":"data_models/#projects","title":"Projects","text":"

Models for describing a roadway deletion project card (e.g. to delete roadway features).

Pydantic models for roadway property changes which align with ProjectCard schemas.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_deletion.RoadwayDeletion","title":"RoadwayDeletion","text":"

Bases: RecordModel

Requirements for describing a roadway deletion project card (e.g. to delete).

Source code in network_wrangler/models/projects/roadway_deletion.py
class RoadwayDeletion(RecordModel):\n    \"\"\"Requirements for describing roadway deletion project card (e.g. to delete).\"\"\"\n\n    require_any_of: ClassVar[AnyOf] = [[\"links\", \"nodes\"]]\n    model_config = ConfigDict(extra=\"forbid\")\n\n    links: Optional[SelectLinksDict] = None\n    nodes: Optional[SelectNodesDict] = None\n    clean_shapes: Optional[bool] = False\n    clean_nodes: Optional[bool] = False\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.GroupedScopedPropertySetItem","title":"GroupedScopedPropertySetItem","text":"

Bases: BaseModel

Value for setting property value for a single time of day and category.

Source code in network_wrangler/models/projects/roadway_property_change.py
class GroupedScopedPropertySetItem(BaseModel):\n    \"\"\"Value for setting property value for a single time of day and category.\"\"\"\n\n    model_config = ConfigDict(extra=\"forbid\", exclude_none=True)\n\n    category: Optional[Union[str, int]] = None\n    timespan: Optional[TimespanString] = None\n    categories: Optional[list[Any]] = []\n    timespans: Optional[list[TimespanString]] = []\n    set: Optional[Any] = None\n    existing: Optional[Any] = None\n    change: Optional[Union[int, float]] = None\n    _examples = [\n        {\"category\": \"hov3\", \"timespan\": [\"6:00\", \"9:00\"], \"set\": 2.0},\n        {\"category\": \"hov2\", \"set\": 2.0},\n        {\"timespan\": [\"12:00\", \"2:00\"], \"change\": -1},\n    ]\n\n    @model_validator(mode=\"before\")\n    @classmethod\n    def check_set_or_change(cls, data: dict):\n        \"\"\"Validate that each item has a set or change value.\"\"\"\n        if not isinstance(data, dict):\n            return data\n        if \"set\" in data and \"change\" in data:\n            WranglerLogger.warning(\"Both set and change are set. Ignoring change.\")\n            data[\"change\"] = None\n        return data\n\n    @model_validator(mode=\"before\")\n    @classmethod\n    def check_categories_or_timespans(cls, data: Any) -> Any:\n        \"\"\"Validate that each item has a category or timespan value.\"\"\"\n        if not isinstance(data, dict):\n            return data\n        require_any_of = [\"category\", \"timespan\", \"categories\", \"timespans\"]\n        if not any([attr in data for attr in require_any_of]):\n            raise ValidationError(f\"Require at least one of {require_any_of}\")\n        return data\n
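The precedence rule enforced by `check_set_or_change` can be sketched as a standalone function (`resolve_set_or_change` is a hypothetical name for illustration, not the library's API):

```python
from typing import Any

def resolve_set_or_change(data: dict[str, Any]) -> dict[str, Any]:
    # When both `set` and `change` are present, `set` wins and
    # `change` is discarded, mirroring the validator above.
    if "set" in data and "change" in data:
        data = {**data, "change": None}
    return data
```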
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.GroupedScopedPropertySetItem.check_categories_or_timespans","title":"check_categories_or_timespans(data) classmethod","text":"

Validate that each item has a category or timespan value.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"before\")\n@classmethod\ndef check_categories_or_timespans(cls, data: Any) -> Any:\n    \"\"\"Validate that each item has a category or timespan value.\"\"\"\n    if not isinstance(data, dict):\n        return data\n    require_any_of = [\"category\", \"timespan\", \"categories\", \"timespans\"]\n    if not any([attr in data for attr in require_any_of]):\n        raise ValidationError(f\"Require at least one of {require_any_of}\")\n    return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.GroupedScopedPropertySetItem.check_set_or_change","title":"check_set_or_change(data) classmethod","text":"

Validate that each item has a set or change value.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"before\")\n@classmethod\ndef check_set_or_change(cls, data: dict):\n    \"\"\"Validate that each item has a set or change value.\"\"\"\n    if not isinstance(data, dict):\n        return data\n    if \"set\" in data and \"change\" in data:\n        WranglerLogger.warning(\"Both set and change are set. Ignoring change.\")\n        data[\"change\"] = None\n    return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.IndivScopedPropertySetItem","title":"IndivScopedPropertySetItem","text":"

Bases: BaseModel

Value for setting property value for a single time of day and category.

Source code in network_wrangler/models/projects/roadway_property_change.py
class IndivScopedPropertySetItem(BaseModel):\n    \"\"\"Value for setting property value for a single time of day and category.\"\"\"\n\n    model_config = ConfigDict(extra=\"forbid\", exclude_none=True)\n\n    category: Optional[Union[str, int]] = DEFAULT_CATEGORY\n    timespan: Optional[TimespanString] = DEFAULT_TIMESPAN\n    set: Optional[Any] = None\n    existing: Optional[Any] = None\n    change: Optional[Union[int, float]] = None\n    _examples = [\n        {\"category\": \"hov3\", \"timespan\": [\"6:00\", \"9:00\"], \"set\": 2.0},\n        {\"category\": \"hov2\", \"set\": 2.0},\n        {\"timespan\": [\"12:00\", \"2:00\"], \"change\": -1},\n    ]\n\n    @property\n    def timespan_dt(self) -> list[list[datetime]]:\n        \"\"\"Convert timespan to list of datetime objects.\"\"\"\n        return str_to_time_list(self.timespan)\n\n    @model_validator(mode=\"before\")\n    @classmethod\n    def check_set_or_change(cls, data: dict):\n        \"\"\"Validate that each item has a set or change value.\"\"\"\n        if not isinstance(data, dict):\n            return data\n        if data.get(\"set\") and data.get(\"change\"):\n            WranglerLogger.warning(\"Both set and change are set. Ignoring change.\")\n            data[\"change\"] = None\n\n        WranglerLogger.debug(f\"Data: {data}\")\n        if data.get(\"set\", None) is None and data.get(\"change\", None) is None:\n            WranglerLogger.debug(\n                f\"Must have `set` or `change` in IndivScopedPropertySetItem. 
\\\n                           Found: {data}\"\n            )\n            raise ValueError(\"Must have `set` or `change` in IndivScopedPropertySetItem\")\n        return data\n\n    @model_validator(mode=\"before\")\n    @classmethod\n    def check_categories_or_timespans(cls, data: Any) -> Any:\n        \"\"\"Validate that each item has a category or timespan value.\"\"\"\n        if not isinstance(data, dict):\n            return data\n        require_any_of = [\"category\", \"timespan\"]\n        if not any([attr in data for attr in require_any_of]):\n            raise ValidationError(f\"Require at least one of {require_any_of}\")\n        return data\n
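The `timespan_dt` property relies on `str_to_time_list` to turn strings like `["6:00", "9:00"]` into datetimes; a minimal sketch of the assumed parsing (the library's actual helper may handle more formats):

```python
from datetime import datetime

def str_to_time_list(timespan: list[str]) -> list[datetime]:
    # Parse "H:MM" strings onto strptime's reference date (1900-01-01),
    # so the resulting datetimes can be compared for overlap.
    return [datetime.strptime(t, "%H:%M") for t in timespan]

span = str_to_time_list(["6:00", "9:00"])
```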
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.IndivScopedPropertySetItem.timespan_dt","title":"timespan_dt: list[list[datetime]] property","text":"

Convert timespan to list of datetime objects.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.IndivScopedPropertySetItem.check_categories_or_timespans","title":"check_categories_or_timespans(data) classmethod","text":"

Validate that each item has a category or timespan value.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"before\")\n@classmethod\ndef check_categories_or_timespans(cls, data: Any) -> Any:\n    \"\"\"Validate that each item has a category or timespan value.\"\"\"\n    if not isinstance(data, dict):\n        return data\n    require_any_of = [\"category\", \"timespan\"]\n    if not any([attr in data for attr in require_any_of]):\n        raise ValidationError(f\"Require at least one of {require_any_of}\")\n    return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.IndivScopedPropertySetItem.check_set_or_change","title":"check_set_or_change(data) classmethod","text":"

Validate that each item has a set or change value.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"before\")\n@classmethod\ndef check_set_or_change(cls, data: dict):\n    \"\"\"Validate that each item has a set or change value.\"\"\"\n    if not isinstance(data, dict):\n        return data\n    if data.get(\"set\") and data.get(\"change\"):\n        WranglerLogger.warning(\"Both set and change are set. Ignoring change.\")\n        data[\"change\"] = None\n\n    WranglerLogger.debug(f\"Data: {data}\")\n    if data.get(\"set\", None) is None and data.get(\"change\", None) is None:\n        WranglerLogger.debug(\n            f\"Must have `set` or `change` in IndivScopedPropertySetItem. \\\n                       Found: {data}\"\n        )\n        raise ValueError(\"Must have `set` or `change` in IndivScopedPropertySetItem\")\n    return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.NodeGeometryChange","title":"NodeGeometryChange","text":"

Bases: RecordModel

Value for setting node geometry given a model_node_id.

Source code in network_wrangler/models/projects/roadway_property_change.py
class NodeGeometryChange(RecordModel):\n    \"\"\"Value for setting node geometry given a model_node_id.\"\"\"\n\n    model_config = ConfigDict(extra=\"ignore\")\n    X: float\n    Y: float\n    in_crs: Optional[int] = LAT_LON_CRS\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.NodeGeometryChangeTable","title":"NodeGeometryChangeTable","text":"

Bases: DataFrameModel

DataFrameModel for setting node geometry given a model_node_id.

Source code in network_wrangler/models/projects/roadway_property_change.py
class NodeGeometryChangeTable(DataFrameModel):\n    \"\"\"DataFrameModel for setting node geometry given a model_node_id.\"\"\"\n\n    model_node_id: Series[int]\n    X: Series[float] = Field(coerce=True)\n    Y: Series[float] = Field(coerce=True)\n    in_crs: Series[int] = Field(default=LAT_LON_CRS)\n\n    class Config:\n        \"\"\"Config for NodeGeometryChangeTable.\"\"\"\n\n        add_missing_columns = True\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.NodeGeometryChangeTable.Config","title":"Config","text":"

Config for NodeGeometryChangeTable.

Source code in network_wrangler/models/projects/roadway_property_change.py
class Config:\n    \"\"\"Config for NodeGeometryChangeTable.\"\"\"\n\n    add_missing_columns = True\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.RoadPropertyChange","title":"RoadPropertyChange","text":"

Bases: RecordModel

Value for setting property value for a time of day and category.

Source code in network_wrangler/models/projects/roadway_property_change.py
class RoadPropertyChange(RecordModel):\n    \"\"\"Value for setting property value for a time of day and category.\"\"\"\n\n    model_config = ConfigDict(extra=\"forbid\", exclude_none=True)\n\n    existing: Optional[Any] = None\n    change: Optional[Union[int, float]] = None\n    set: Optional[Any] = None\n    scoped: Optional[Union[None, ScopedPropertySetList]] = None\n\n    require_one_of: ClassVar[OneOf] = [[\"change\", \"set\"]]\n\n    _examples = [\n        {\"set\": 1},\n        {\"existing\": 2, \"change\": -1},\n        {\n            \"set\": 0,\n            \"scoped\": [\n                {\"timespan\": [\"6:00\", \"9:00\"], \"value\": 2.0},\n                {\"timespan\": [\"9:00\", \"15:00\"], \"value\": 4.0},\n            ],\n        },\n        {\n            \"set\": 0,\n            \"scoped\": [\n                {\n                    \"categories\": [\"hov3\", \"hov2\"],\n                    \"timespan\": [\"6:00\", \"9:00\"],\n                    \"value\": 2.0,\n                },\n                {\"category\": \"truck\", \"timespan\": [\"6:00\", \"9:00\"], \"value\": 4.0},\n            ],\n        },\n        {\n            \"set\": 0,\n            \"scoped\": [\n                {\"categories\": [\"hov3\", \"hov2\"], \"value\": 2.0},\n                {\"category\": \"truck\", \"value\": 4.0},\n            ],\n        },\n    ]\n
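How `set` and `change` combine with a current value when a change is applied can be sketched with a hypothetical helper (the application semantics are assumed from the field names, not taken from the library):

```python
def apply_property_change(current: float, change: dict) -> float:
    # `set` replaces the value outright; `change` applies a delta.
    # `require_one_of` above guarantees at least one is present.
    if change.get("set") is not None:
        return change["set"]
    if change.get("change") is not None:
        return current + change["change"]
    raise ValueError("Require one of `set` or `change`.")
```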
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopeConflictError","title":"ScopeConflictError","text":"

Bases: Exception

Raised when there is a scope conflict in a list of ScopedPropertySetItems.

Source code in network_wrangler/models/projects/roadway_property_change.py
class ScopeConflictError(Exception):\n    \"\"\"Raised when there is a scope conflict in a list of ScopedPropertySetItems.\"\"\"\n\n    pass\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList","title":"ScopedPropertySetList","text":"

Bases: RootListMixin, RootModel

List of ScopedPropertySetItems used to evaluate and apply changes to roadway properties.

Source code in network_wrangler/models/projects/roadway_property_change.py
class ScopedPropertySetList(RootListMixin, RootModel):\n    \"\"\"List of ScopedPropertySetItems used to evaluate and apply changes to roadway properties.\"\"\"\n\n    root: list[IndivScopedPropertySetItem]\n\n    @model_validator(mode=\"before\")\n    @classmethod\n    def check_set_or_change(cls, data: list):\n        \"\"\"Validate that each item has a set or change value.\"\"\"\n        data = _grouped_to_indiv_list_of_scopedpropsetitem(data)\n        return data\n\n    @model_validator(mode=\"after\")\n    def check_conflicting_scopes(self):\n        \"\"\"Check for conflicting scopes in the list of ScopedPropertySetItem.\"\"\"\n        conflicts = []\n        for i in self:\n            if i.timespan == DEFAULT_TIMESPAN:\n                continue\n            overlapping_ts_i = self.overlapping_timespans(i.timespan)\n            for j in overlapping_ts_i:\n                if j == i:\n                    continue\n                if j.category == i.category:\n                    conflicts.append((i, j))\n        if conflicts:\n            WranglerLogger.error(\"Found conflicting scopes in ScopedPropertySetList:\\n{conflicts}\")\n            raise ScopeConflictError(\"Conflicting scopes in ScopedPropertySetList\")\n\n        return self\n\n    def overlapping_timespans(self, timespan: TimespanString) -> list[IndivScopedPropertySetItem]:\n        \"\"\"Return a list of items that overlap with the given timespan.\"\"\"\n        timespan_dt = str_to_time_list(timespan)\n        return [i for i in self if dt_overlaps(i.timespan_dt, timespan_dt)]\n\n    @property\n    def change_items(self) -> list[IndivScopedPropertySetItem]:\n        \"\"\"Filter out items that do not have a change value.\"\"\"\n        WranglerLogger.debug(f\"self.root[0]: {self.root[0]}\")\n        return [i for i in self if i.change is not None]\n\n    @property\n    def set_items(self):\n        \"\"\"Filter out items that do not have a set value.\"\"\"\n        return [i for i in self if 
i.set is not None]\n
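The conflict rule in `check_conflicting_scopes` — two items clash when their timespans overlap and their categories match — can be sketched over plain dicts (`find_scope_conflicts` and `dt_overlaps` here are illustrative stand-ins, with assumed half-open overlap semantics):

```python
from datetime import datetime

def dt_overlaps(a: list[datetime], b: list[datetime]) -> bool:
    # Two [start, end] spans overlap if each starts before the other ends.
    return a[0] < b[1] and b[0] < a[1]

def find_scope_conflicts(items: list[dict]) -> list[tuple[dict, dict]]:
    # Pairs with the same category and overlapping timespans conflict.
    conflicts = []
    for idx, a in enumerate(items):
        for b in items[idx + 1:]:
            if a["category"] == b["category"] and dt_overlaps(a["span"], b["span"]):
                conflicts.append((a, b))
    return conflicts
```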
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList.change_items","title":"change_items: list[IndivScopedPropertySetItem] property","text":"

Filter out items that do not have a change value.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList.set_items","title":"set_items property","text":"

Filter out items that do not have a set value.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList.check_conflicting_scopes","title":"check_conflicting_scopes()","text":"

Check for conflicting scopes in the list of ScopedPropertySetItem.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"after\")\ndef check_conflicting_scopes(self):\n    \"\"\"Check for conflicting scopes in the list of ScopedPropertySetItem.\"\"\"\n    conflicts = []\n    for i in self:\n        if i.timespan == DEFAULT_TIMESPAN:\n            continue\n        overlapping_ts_i = self.overlapping_timespans(i.timespan)\n        for j in overlapping_ts_i:\n            if j == i:\n                continue\n            if j.category == i.category:\n                conflicts.append((i, j))\n    if conflicts:\n        WranglerLogger.error(\"Found conflicting scopes in ScopedPropertySetList:\\n{conflicts}\")\n        raise ScopeConflictError(\"Conflicting scopes in ScopedPropertySetList\")\n\n    return self\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList.check_set_or_change","title":"check_set_or_change(data) classmethod","text":"

Validate that each item has a set or change value.

Source code in network_wrangler/models/projects/roadway_property_change.py
@model_validator(mode=\"before\")\n@classmethod\ndef check_set_or_change(cls, data: list):\n    \"\"\"Validate that each item has a set or change value.\"\"\"\n    data = _grouped_to_indiv_list_of_scopedpropsetitem(data)\n    return data\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_property_change.ScopedPropertySetList.overlapping_timespans","title":"overlapping_timespans(timespan)","text":"

Return a list of items that overlap with the given timespan.

Source code in network_wrangler/models/projects/roadway_property_change.py
def overlapping_timespans(self, timespan: TimespanString) -> list[IndivScopedPropertySetItem]:\n    \"\"\"Return a list of items that overlap with the given timespan.\"\"\"\n    timespan_dt = str_to_time_list(timespan)\n    return [i for i in self if dt_overlaps(i.timespan_dt, timespan_dt)]\n
"},{"location":"data_models/#roadway-selections","title":"Roadway Selections","text":"

Pydantic Roadway Selection Models which should align with ProjectCard data models.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectFacility","title":"SelectFacility","text":"

Bases: RecordModel

Roadway Facility Selection.

Source code in network_wrangler/models/projects/roadway_selection.py
class SelectFacility(RecordModel):\n    \"\"\"Roadway Facility Selection.\"\"\"\n\n    require_one_of: ClassVar[OneOf] = [[\"links\", \"nodes\", [\"links\", \"from\", \"to\"]]]\n    model_config = ConfigDict(extra=\"forbid\")\n\n    links: Optional[SelectLinksDict] = None\n    nodes: Optional[SelectNodesDict] = None\n    from_: Annotated[Optional[SelectNodeDict], Field(None, alias=\"from\")]\n    to: Optional[SelectNodeDict] = None\n\n    _examples = [\n        {\n            \"links\": {\"name\": [\"Main Street\"]},\n            \"from\": {\"model_node_id\": 1},\n            \"to\": {\"model_node_id\": 2},\n        },\n        {\"nodes\": {\"osm_node_id\": [\"1\", \"2\", \"3\"]}},\n        {\"nodes\": {\"model_node_id\": [1, 2, 3]}},\n        {\"links\": {\"model_link_id\": [1, 2, 3]}},\n    ]\n\n    @property\n    def feature_types(self) -> str:\n        \"\"\"One of `segment`, `links`, or `nodes`.\"\"\"\n        if self.links and self.from_ and self.to:\n            return \"segment\"\n        if self.links:\n            return \"links\"\n        if self.nodes:\n            return \"nodes\"\n        raise ValueError(\"SelectFacility must have either links or nodes defined.\")\n\n    @property\n    def selection_type(self) -> str:\n        \"\"\"One of `segment`, `links`, or `nodes`.\"\"\"\n        if self.feature_types == \"segment\":\n            return \"segment\"\n        if self.feature_types == \"links\":\n            return self.links.selection_type\n        if self.feature_types == \"nodes\":\n            return self.nodes.selection_type\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectFacility.feature_types","title":"feature_types: str property","text":"

One of segment, links, or nodes.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectFacility.selection_type","title":"selection_type: str property","text":"

One of segment, links, or nodes.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict","title":"SelectLinksDict","text":"

Bases: RecordModel

Requirements for describing links in the facility section of a project card.

Examples:

    {'name': ['Main St'], 'modes': ['drive']}\n    {'osm_link_id': ['123456789']}\n    {'model_link_id': [123456789], 'modes': ['walk']}\n    {'all': 'True', 'modes': ['transit']}\n
Source code in network_wrangler/models/projects/roadway_selection.py
class SelectLinksDict(RecordModel):\n    \"\"\"requirements for describing links in the `facility` section of a project card.\n\n    Examples:\n    ```python\n        {'name': ['Main St'], 'modes': ['drive']}\n        {'osm_link_id': ['123456789']}\n        {'model_link_id': [123456789], 'modes': ['walk']}\n        {'all': 'True', 'modes': ['transit']}\n    ```\n\n    \"\"\"\n\n    require_conflicts: ClassVar[ConflictsWith] = [\n        [\"all\", \"osm_link_id\"],\n        [\"all\", \"model_link_id\"],\n        [\"all\", \"name\"],\n        [\"all\", \"ref\"],\n        [\"osm_link_id\", \"model_link_id\"],\n        [\"osm_link_id\", \"name\"],\n        [\"model_link_id\", \"name\"],\n    ]\n    require_any_of: ClassVar[AnyOf] = [[\"name\", \"ref\", \"osm_link_id\", \"model_link_id\", \"all\"]]\n    _initial_selection_fields: ClassVar[list[str]] = [\n        \"name\",\n        \"ref\",\n        \"osm_link_id\",\n        \"model_link_id\",\n        \"all\",\n    ]\n    _explicit_id_fields: ClassVar[list[str]] = [\"osm_link_id\", \"model_link_id\"]\n    _segment_id_fields: ClassVar[list[str]] = [\n        \"name\",\n        \"ref\",\n        \"osm_link_id\",\n        \"model_link_id\",\n        \"modes\",\n    ]\n    _special_fields: ClassVar[list[str]] = [\"modes\", \"ignore_missing\"]\n    model_config = ConfigDict(extra=\"allow\")\n\n    all: Optional[bool] = False\n    name: Annotated[Optional[list[str]], Field(None, min_length=1)]\n    ref: Annotated[Optional[list[str]], Field(None, min_length=1)]\n    osm_link_id: Annotated[Optional[list[str]], Field(None, min_length=1)]\n    model_link_id: Annotated[Optional[list[int]], Field(None, min_length=1)]\n    modes: list[str] = DEFAULT_SEARCH_MODES\n    ignore_missing: Optional[bool] = True\n\n    _examples = [\n        {\"name\": [\"Main St\"], \"modes\": [\"drive\"]},\n        {\"osm_link_id\": [\"123456789\"]},\n        {\"model_link_id\": [123456789], \"modes\": [\"walk\"]},\n        {\"all\": \"True\", \"modes\": 
[\"transit\"]},\n    ]\n\n    @property\n    def asdict(self) -> dict:\n        \"\"\"Model as a dictionary.\"\"\"\n        return self.model_dump(exclude_none=True, by_alias=True)\n\n    @property\n    def fields(self) -> list[str]:\n        \"\"\"All fields in the selection.\"\"\"\n        return list(self.model_dump(exclude_none=True, by_alias=True).keys())\n\n    @property\n    def initial_selection_fields(self) -> list[str]:\n        \"\"\"Fields used in the initial selection of links.\"\"\"\n        if self.all:\n            return [\"all\"]\n        return [f for f in self._initial_selection_fields if getattr(self, f)]\n\n    @property\n    def explicit_id_fields(self) -> list[str]:\n        \"\"\"Fields that can be used in a slection on their own.\n\n        e.g. `osm_link_id` and `model_link_id`.\n        \"\"\"\n        return [k for k in self._explicit_id_fields if getattr(self, k)]\n\n    @property\n    def segment_id_fields(self) -> list[str]:\n        \"\"\"Fields that can be used in an intial segment selection.\n\n        e.g. 
`name`, `ref`, `osm_link_id`, or `model_link_id`.\n        \"\"\"\n        return [k for k in self._segment_id_fields if getattr(self, k)]\n\n    @property\n    def additional_selection_fields(self):\n        \"\"\"Return a list of fields that are not part of the initial selection fields.\"\"\"\n        _potential = list(\n            set(self.fields) - set(self.initial_selection_fields) - set(self._special_fields)\n        )\n        return [f for f in _potential if getattr(self, f)]\n\n    @property\n    def selection_type(self):\n        \"\"\"One of `all`, `explicit_ids`, or `segment`.\"\"\"\n        if self.all:\n            return \"all\"\n        if self.explicit_id_fields:\n            return \"explicit_ids\"\n        if self.segment_id_fields:\n            return \"segment\"\n        else:\n            raise SelectionFormatError(\n                \"If not a segment, Select Links should have either `all` or an explicit id.\"\n            )\n\n    @property\n    def explicit_id_selection_dict(self):\n        \"\"\"Return a dictionary of fields that are explicit ids.\"\"\"\n        return {k: v for k, v in self.asdict.items() if k in self.explicit_id_fields}\n\n    @property\n    def segment_selection_dict(self):\n        \"\"\"Return a dictionary of fields that are explicit ids.\"\"\"\n        return {k: v for k, v in self.asdict.items() if k in self.segment_id_fields}\n\n    @property\n    def additional_selection_dict(self):\n        \"\"\"Return a dictionary of fields that are not part of the initial selection fields.\"\"\"\n        return {k: v for k, v in self.asdict.items() if k in self.additional_selection_fields}\n
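The `selection_type` dispatch — `all` wins, then explicit ids, then segment fields — can be sketched over a plain dict (a simplified stand-in using only `name`/`ref` as segment fields, not the library's API):

```python
def link_selection_type(sel: dict) -> str:
    # Precedence mirrors SelectLinksDict.selection_type above.
    if sel.get("all"):
        return "all"
    if any(sel.get(k) for k in ("osm_link_id", "model_link_id")):
        return "explicit_ids"
    if any(sel.get(k) for k in ("name", "ref")):
        return "segment"
    raise ValueError(
        "If not a segment, Select Links should have either `all` or an explicit id."
    )
```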
"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.additional_selection_dict","title":"additional_selection_dict property","text":"

Return a dictionary of fields that are not part of the initial selection fields.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.additional_selection_fields","title":"additional_selection_fields property","text":"

Return a list of fields that are not part of the initial selection fields.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.asdict","title":"asdict: dict property","text":"

Model as a dictionary.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.explicit_id_fields","title":"explicit_id_fields: list[str] property","text":"

Fields that can be used in a selection on their own.

e.g. osm_link_id and model_link_id.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.explicit_id_selection_dict","title":"explicit_id_selection_dict property","text":"

Return a dictionary of fields that are explicit ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.fields","title":"fields: list[str] property","text":"

All fields in the selection.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.initial_selection_fields","title":"initial_selection_fields: list[str] property","text":"

Fields used in the initial selection of links.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.segment_id_fields","title":"segment_id_fields: list[str] property","text":"

Fields that can be used in an initial segment selection.

e.g. name, ref, osm_link_id, or model_link_id.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.segment_selection_dict","title":"segment_selection_dict property","text":"

Return a dictionary of fields that are segment ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectLinksDict.selection_type","title":"selection_type property","text":"

One of all, explicit_ids, or segment.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodeDict","title":"SelectNodeDict","text":"

Bases: RecordModel

Selection of a single roadway node in the facility section of a project card.

Source code in network_wrangler/models/projects/roadway_selection.py
class SelectNodeDict(RecordModel):\n    \"\"\"Selection of a single roadway node in the `facility` section of a project card.\"\"\"\n\n    require_one_of: ClassVar[OneOf] = [[\"osm_node_id\", \"model_node_id\"]]\n    initial_selection_fields: ClassVar[list[str]] = [\"osm_node_id\", \"model_node_id\"]\n    explicit_id_fields: ClassVar[list[str]] = [\"osm_node_id\", \"model_node_id\"]\n    model_config = ConfigDict(extra=\"allow\")\n\n    osm_node_id: Optional[str] = None\n    model_node_id: Optional[int] = None\n\n    _examples = [{\"osm_node_id\": \"12345\"}, {\"model_node_id\": 67890}]\n\n    @property\n    def selection_type(self):\n        \"\"\"One of `all` or `explicit_ids`.\"\"\"\n        _explicit_ids = [k for k in self.explicit_id_fields if getattr(self, k)]\n        if _explicit_ids:\n            return \"explicit_ids\"\n        WranglerLogger.debug(\n            f\"SelectNode should have an explicit id: {self.explicit_id_fields} \\\n                Found none in selection dict: \\n{self.model_dump(by_alias=True)}\"\n        )\n        raise SelectionFormatError(\"Select Node should have either `all` or an explicit id.\")\n\n    @property\n    def explicit_id_selection_dict(self) -> dict:\n        \"\"\"Return a dictionary of field that are explicit ids.\"\"\"\n        return {\n            k: [v]\n            for k, v in self.model_dump(exclude_none=True, by_alias=True).items()\n            if k in self.explicit_id_fields\n        }\n\n    @property\n    def additional_selection_fields(self) -> list[str]:\n        \"\"\"Return a list of fields that are not part of the initial selection fields.\"\"\"\n        return list(\n            set(self.model_dump(exclude_none=True, by_alias=True).keys())\n            - set(self.initial_selection_fields)\n        )\n\n    @property\n    def additional_selection_dict(self) -> dict:\n        \"\"\"Return a dictionary of fields that are not part of the initial selection fields.\"\"\"\n        return {\n            k: 
v\n            for k, v in self.model_dump(exclude_none=True, by_alias=True).items()\n            if k in self.additional_selection_fields\n        }\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodeDict.additional_selection_dict","title":"additional_selection_dict: dict property","text":"

Return a dictionary of fields that are not part of the initial selection fields.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodeDict.additional_selection_fields","title":"additional_selection_fields: list[str] property","text":"

Return a list of fields that are not part of the initial selection fields.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodeDict.explicit_id_selection_dict","title":"explicit_id_selection_dict: dict property","text":"

Return a dictionary of fields that are explicit ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodeDict.selection_type","title":"selection_type property","text":"

One of all or explicit_ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict","title":"SelectNodesDict","text":"

Bases: RecordModel

Requirements for describing multiple nodes of a project card (e.g. to delete).

Source code in network_wrangler/models/projects/roadway_selection.py
class SelectNodesDict(RecordModel):\n    \"\"\"Requirements for describing multiple nodes of a project card (e.g. to delete).\"\"\"\n\n    require_any_of: ClassVar[AnyOf] = [[\"osm_node_id\", \"model_node_id\"]]\n    _explicit_id_fields: ClassVar[list[str]] = [\"osm_node_id\", \"model_node_id\"]\n    model_config = ConfigDict(extra=\"forbid\")\n\n    all: Optional[bool] = False\n    osm_node_id: Annotated[Optional[list[str]], Field(None, min_length=1)]\n    model_node_id: Annotated[Optional[list[int]], Field(min_length=1)]\n    ignore_missing: Optional[bool] = True\n\n    _examples = [\n        {\"osm_node_id\": [\"12345\", \"67890\"], \"model_node_id\": [12345, 67890]},\n        {\"osm_node_id\": [\"12345\", \"67890\"]},\n        {\"model_node_id\": [12345, 67890]},\n    ]\n\n    @property\n    def asdict(self) -> dict:\n        \"\"\"Model as a dictionary.\"\"\"\n        return self.model_dump(exclude_none=True, by_alias=True)\n\n    @property\n    def fields(self) -> list[str]:\n        \"\"\"List of fields in the selection.\"\"\"\n        return list(self.model_dump(exclude_none=True, by_alias=True).keys())\n\n    @property\n    def selection_type(self):\n        \"\"\"One of `all` or `explicit_ids`.\"\"\"\n        if self.all:\n            return \"all\"\n        if self.explicit_id_fields:\n            return \"explicit_ids\"\n        WranglerLogger.debug(\n            f\"SelectNodes should have either `all` or an explicit id: {self.explicit_id_fields}. 
\\\n                Found neither in nodes selection: \\n{self.model_dump(by_alias=True)}\"\n        )\n        raise SelectionFormatError(\"Select Node should have either `all` or an explicit id.\")\n\n    @property\n    def explicit_id_fields(self) -> list[str]:\n        \"\"\"Fields which can be used in a selection on their own.\"\"\"\n        return [k for k in self._explicit_id_fields if getattr(self, k)]\n\n    @property\n    def explicit_id_selection_dict(self):\n        \"\"\"Return a dictionary of fields that are explicit ids.\"\"\"\n        return {k: v for k, v in self.asdict.items() if k in self.explicit_id_fields}\n
"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict.asdict","title":"asdict: dict property","text":"

Model as a dictionary.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict.explicit_id_fields","title":"explicit_id_fields: list[str] property","text":"

Fields which can be used in a selection on their own.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict.explicit_id_selection_dict","title":"explicit_id_selection_dict property","text":"

Return a dictionary of fields that are explicit ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict.fields","title":"fields: list[str] property","text":"

List of fields in the selection.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectNodesDict.selection_type","title":"selection_type property","text":"

One of all or explicit_ids.

"},{"location":"data_models/#network_wrangler.models.projects.roadway_selection.SelectionFormatError","title":"SelectionFormatError","text":"

Bases: Exception

Raised when there is an issue with the selection format.

Source code in network_wrangler/models/projects/roadway_selection.py
class SelectionFormatError(Exception):\n    \"\"\"Raised when there is an issue with the selection format.\"\"\"\n\n    pass\n
"},{"location":"data_models/#transit-selections","title":"Transit Selections","text":"

Pydantic data models for transit selection properties.

"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.SelectRouteProperties","title":"SelectRouteProperties","text":"

Bases: RecordModel

Selection properties for transit routes.

Source code in network_wrangler/models/projects/transit_selection.py
class SelectRouteProperties(RecordModel):\n    \"\"\"Selection properties for transit routes.\"\"\"\n\n    route_short_name: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    route_long_name: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    agency_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    route_type: Annotated[Optional[List[int]], Field(None, min_length=1)]\n\n    model_config = ConfigDict(\n        extra=\"allow\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n
"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.SelectTransitLinks","title":"SelectTransitLinks","text":"

Bases: RecordModel

Requirements for describing multiple transit links of a project card.

Source code in network_wrangler/models/projects/transit_selection.py
class SelectTransitLinks(RecordModel):\n    \"\"\"Requirements for describing multiple transit links of a project card.\"\"\"\n\n    require_one_of: ClassVar[OneOf] = [\n        [\"ab_nodes\", \"model_link_id\"],\n    ]\n\n    model_link_id: Annotated[Optional[List[int]], Field(min_length=1)] = None\n    ab_nodes: Annotated[Optional[List[TransitABNodesModel]], Field(min_length=1)] = None\n    require: Optional[SelectionRequire] = \"any\"\n\n    model_config = ConfigDict(\n        extra=\"forbid\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n    _examples = [\n        {\n            \"ab_nodes\": [{\"A\": \"75520\", \"B\": \"66380\"}, {\"A\": \"66380\", \"B\": \"75520\"}],\n            \"type\": \"any\",\n        },\n        {\n            \"model_link_id\": [123, 321],\n            \"type\": \"all\",\n        },\n    ]\n
"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.SelectTransitNodes","title":"SelectTransitNodes","text":"

Bases: RecordModel

Requirements for describing multiple transit nodes of a project card (e.g. to delete).

Source code in network_wrangler/models/projects/transit_selection.py
class SelectTransitNodes(RecordModel):\n    \"\"\"Requirements for describing multiple transit nodes of a project card (e.g. to delete).\"\"\"\n\n    require_any_of: ClassVar[AnyOf] = [\n        [\n            # \"stop_id\", TODO Not implemented\n            \"model_node_id\",\n        ]\n    ]\n\n    # stop_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)] TODO Not implemented\n    model_node_id: Annotated[List[int], Field(min_length=1)]\n    require: Optional[SelectionRequire] = \"any\"\n\n    model_config = ConfigDict(\n        extra=\"forbid\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n\n    _examples = [\n        # {\"stop_id\": [\"stop1\", \"stop2\"], \"require\": \"any\"},  TODO Not implemented\n        {\"model_node_id\": [1, 2], \"require\": \"all\"},\n    ]\n
"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.SelectTransitTrips","title":"SelectTransitTrips","text":"

Bases: RecordModel

Selection properties for transit trips.

Source code in network_wrangler/models/projects/transit_selection.py
class SelectTransitTrips(RecordModel):\n    \"\"\"Selection properties for transit trips.\"\"\"\n\n    trip_properties: Optional[SelectTripProperties] = None\n    route_properties: Optional[SelectRouteProperties] = None\n    timespans: Annotated[Optional[List[TimespanString]], Field(None, min_length=1)]\n    nodes: Optional[SelectTransitNodes] = None\n    links: Optional[SelectTransitLinks] = None\n\n    model_config = ConfigDict(\n        extra=\"forbid\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n
"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.SelectTripProperties","title":"SelectTripProperties","text":"

Bases: RecordModel

Selection properties for transit trips.

Source code in network_wrangler/models/projects/transit_selection.py
class SelectTripProperties(RecordModel):\n    \"\"\"Selection properties for transit trips.\"\"\"\n\n    trip_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    shape_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    direction_id: Annotated[Optional[int], Field(None)]\n    service_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    route_id: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n    trip_short_name: Annotated[Optional[List[ForcedStr]], Field(None, min_length=1)]\n\n    model_config = ConfigDict(\n        extra=\"allow\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n
"},{"location":"data_models/#network_wrangler.models.projects.transit_selection.TransitABNodesModel","title":"TransitABNodesModel","text":"

Bases: RecordModel

Single transit link model.

Source code in network_wrangler/models/projects/transit_selection.py
class TransitABNodesModel(RecordModel):\n    \"\"\"Single transit link model.\"\"\"\n\n    A: Optional[int] = None  # model_node_id\n    B: Optional[int] = None  # model_node_id\n\n    model_config = ConfigDict(\n        extra=\"forbid\",\n        validate_assignment=True,\n        exclude_none=True,\n        protected_namespaces=(),\n    )\n
"},{"location":"design/","title":"Design","text":""},{"location":"design/#atomic-parts","title":"Atomic Parts","text":"

NetworkWrangler deals with four primary atomic parts:

1. Scenario objects describe a Roadway Network, Transit Network, and collection of Projects. Scenarios manage the addition and construction of projects on the network via project cards. Scenarios can be based on or tiered from other scenarios.

2. RoadwayNetwork objects store information about roadway nodes, directed links between nodes, and the shapes of links (note that the same shape can be shared between two or more links). Network Wrangler reads/writes roadway network objects from/to three files: links.json, shapes.geojson, and nodes.geojson. Their data is stored as GeoDataFrames in the object.

3. TransitNetwork objects contain information about stops, routes, trips, shapes, stoptimes, and frequencies. Network Wrangler reads/writes transit network information from/to GTFS CSV files and stores them as DataFrames within a Partridge feed object. Transit networks can be associated with Roadway networks.

4. ProjectCard objects store information (including metadata) about changes to the network. Network Wrangler reads project cards from .yml files, validates them, and manages them within a Scenario object.

"},{"location":"development/","title":"Development","text":""},{"location":"development/#contributing-to-network-wrangler","title":"Contributing to Network Wrangler","text":""},{"location":"development/#roles","title":"Roles","text":""},{"location":"development/#how-to-contribute","title":"How to Contribute","text":""},{"location":"development/#setup","title":"Setup","text":"
  1. Make sure you have a GitHub account.
  2. Make sure you have git, a terminal (e.g. Mac Terminal, CygWin, etc.), and a text editor installed on your local machine. Optionally, you may find it easier to use GitHub Desktop and an IDE (e.g. VSCode, Eclipse, Sublime Text) instead of a simple text editor.
  3. Fork the repository into your own GitHub account and clone it locally.
  4. Install your network_wrangler clone in development mode: pip install -e .
  5. Install documentation requirements: pip install -r requirements.docs.txt
  6. Install development requirements: pip install -r requirements.tests.txt
  7. [Optional] Install act to run github actions locally.
"},{"location":"development/#development-workflow","title":"Development Workflow","text":"
  1. Create an issue for any features/bugs that you are working on.
  2. Create a branch to work on a new issue (or checkout an existing one where the issue is being worked on).
  3. Develop comprehensive tests in the /tests folder.
  4. Modify code, including inline documentation, such that it passes all tests (not just your new ones).
  5. Lint code using pre-commit run --all-files
  6. Fill out information in the pull request template
  7. Submit all pull requests to the develop branch.
  8. Core developer will review your pull request and suggest changes.
  9. After requested changes are complete, core developer will sign off on pull-request merge.

!tip: Keep pull requests small and focused. One issue is best.

!tip: Don\u2019t forget to update any associated #documentation as well!

"},{"location":"development/#documentation","title":"Documentation","text":"

Documentation is produced by mkdocs:

Documentation is built and deployed using the mike package and Github Actions configured in .github/workflows/ for each \u201cref\u201d (i.e. branch) in the network_wrangler repository.

"},{"location":"development/#testing-and-continuous-integration","title":"Testing and Continuous Integration","text":"

Tests and test data reside in the /tests directory:

Continuous Integration is managed by Github Actions in .github/workflows. All tests other than those with the decorator @pytest.mark.skipci will be run.

"},{"location":"development/#project-governance","title":"Project Governance","text":"

The project is currently governed by representatives of its two major organizational contributors:

"},{"location":"development/#code-of-conduct","title":"Code of Conduct","text":"

Contributors to the Network Wrangler Project are expected to read and follow the CODE_OF_CONDUCT for the project.

"},{"location":"development/#contributors","title":"Contributors","text":"
  1. Lisa Zorn - initial Network Wrangler implementation at SFCTA
  2. Billy Charlton
  3. Elizabeth Sall
  4. Sijia Wang
  5. David Ory
  6. Ashish K.

!Note: There are likely more contributors - feel free to add your name if we missed it!

"}]} \ No newline at end of file diff --git a/selection-refactor-metcouncil-test/sitemap.xml b/selection-refactor-metcouncil-test/sitemap.xml index 694cb6b4..ec1fc4ee 100644 --- a/selection-refactor-metcouncil-test/sitemap.xml +++ b/selection-refactor-metcouncil-test/sitemap.xml @@ -2,27 +2,27 @@ https://wsp-sag.github.io/network_wrangler/selection-refactor-metcouncil-test/ - 2024-08-01 + 2024-08-02 daily https://wsp-sag.github.io/network_wrangler/selection-refactor-metcouncil-test/api/ - 2024-08-01 + 2024-08-02 daily https://wsp-sag.github.io/network_wrangler/selection-refactor-metcouncil-test/data_models/ - 2024-08-01 + 2024-08-02 daily https://wsp-sag.github.io/network_wrangler/selection-refactor-metcouncil-test/design/ - 2024-08-01 + 2024-08-02 daily https://wsp-sag.github.io/network_wrangler/selection-refactor-metcouncil-test/development/ - 2024-08-01 + 2024-08-02 daily \ No newline at end of file diff --git a/selection-refactor-metcouncil-test/sitemap.xml.gz b/selection-refactor-metcouncil-test/sitemap.xml.gz index c3ffba68dc4ca797aa191f67665d8951576a3276..5961772a6c8fda5052019ffd7e7c7a1b2f00298c 100644 GIT binary patch literal 275 zcmV+u0qp)CiwFqGU#(^W|8r?{Wo=<_E_iKh0NqqEZo@DP-17=UyF}OMkQR=z^#$!5 zs4^o9l_giCbx-9fy)zR0~e z>RmhKK*&`gM>>d}d3*+#=UI_w9Lz{6Wd~&YG7#KBsMV9uyQ=4aw?7g%%ka zB^jrj=YD!uK0!pDQ~o&3*+dsG17SeR#iGo3HkMPC1F}t1v_42iVrXH}T(2<1L+W%> zH;=O3%epC)JGP?PzU;tr#!uZDAHG{yAvf>vYyR(Jn9*+~N@v)dibV{xnMg?9#qFfI ZU`XSD@gL6T+R7iIe*oS?`XX}#008i`gQNfe literal 275 zcmV+u0qp)CiwFqM|Ep#K|8r?{Wo=<_E_iKh0NqqEZo@DP-17=UyF@4GkQR=z^#$!5 zs4^o9l_f`{{@hKDByR z@1A76m-V(#9@v6ryRrk%885muK72Q@K