"},{"location":"CODE_OF_CONDUCT/#enforcement-responsibilities","title":"Enforcement Responsibilities","text":"
speckcn2
directory:git clone https://github.com/MALES-project/SpeckleCn2Profiler speckcn2\n
virtualenv
:cd speckcn2\npython3 -m venv env\nsource env/bin/activate\npython3 -m pip install -e .[develop]\n
"},{"location":"CONTRIBUTING/#running-tests","title":"Running tests","text":"cd speckcn2\nconda create -n speckcn2 python=3.10\nconda activate speckcn2\npip install -e .[develop]\n
pytest\n
"},{"location":"CONTRIBUTING/#building-the-documentation","title":"Building the documentation","text":"coverage run -m pytest\ncoverage report # to output to terminal\ncoverage html # to generate html report\n
pip install -e .[docs]\nmkdocs serve\n
mkdocs.yml
under the nav
entry.
"},{"location":"home/","title":"Home","text":""},{"location":"home/#specklecn2profiler","title":"SpeckleCn2Profiler:","text":""},{"location":"home/#improving-satellite-communications-with-scidar-and-machine-learning","title":"Improving Satellite Communications with SCIDAR and Machine Learning","text":""},{"location":"home/#overview","title":"Overview","text":"bumpversion
in this workflow.
"},{"location":"home/#repository-contents","title":"Repository Contents","text":"
"},{"location":"home/#getting-started","title":"Getting Started","text":"
while the above command works, python -m pip install git+https://github.com/MALES-project/SpeckleCn2Profiler\n
speckcn2
will be available on pypi as soon as its dependencies get updated.
Explore the Code: Dive into the codebase to understand the implementation details and customize it according to your needs.
We welcome contributions to improve and expand the capabilities of this project. If you have ideas, bug fixes, or enhancements, please submit a pull request. Check out our Contributing Guidelines to get started with development.
"},{"location":"home/#how-to-cite","title":"How to cite","text":"Please consider citing this software that is published in Zenodo under the DOI 10.5281/zenodo.11447920.
"},{"location":"home/#license","title":"License","text":"This project is licensed under the MIT License - see the LICENSE file for details.
"},{"location":"installation/","title":"Installation","text":""},{"location":"installation/#macos-m1-arm64","title":"MacOS M1 arm64","text":"Some dependencies (e.g. scikit
) do not support the latest python version (3.12). Also py3nj
, a dependency of escnn
, requires openmp. We've installed this via homebrew and thus explicitly specifying the C compiler (gnu) prior to installation of this package does the trick.
conda create -n speckcn2 python=3.10\nconda activate speckcn2\nCC=gcc-13 pip3 install py3nj # install py3nj before with gcc instead of clang\npip install -e .\n
"},{"location":"api/api/","title":"Api","text":"EnsembleModel(conf, device)
","text":" Bases: Module
Wrapper that allows any model to be used for ensembled data.
Parameters:
conf
(dict
) \u2013 The global configuration containing the model parameters.
device
(device
) \u2013 The device to use
src/speckcn2/mlmodels.py
def __init__(self, conf: dict, device: torch.device):\n \"\"\"Initializes the EnsembleModel.\n\n Parameters\n ----------\n conf: dict\n The global configuration containing the model parameters.\n device : torch.device\n The device to use\n \"\"\"\n super(EnsembleModel, self).__init__()\n\n self.ensemble_size = conf['preproc'].get('ensemble', 1)\n self.device = device\n self.uniform_ensemble = conf['preproc'].get('ensemble_unif', False)\n resolution = conf['preproc']['resize']\n self.D = conf['noise']['D']\n self.t = conf['noise']['t']\n self.snr = conf['noise']['snr']\n self.dT = conf['noise']['dT']\n self.dO = conf['noise']['dO']\n self.rn = conf['noise']['rn']\n self.fw = conf['noise']['fw']\n self.bit = conf['noise']['bit']\n self.discretize = conf['noise']['discretize']\n self.mask_D, self.mask_d, self.mask_X, self.mask_Y = self.create_masks(\n resolution)\n
"},{"location":"api/models/#speckcn2.mlmodels.EnsembleModel.apply_noise","title":"apply_noise(image_tensor)
","text":"Processes a tensor of 2D images.
Parameters:
image_tensor
(Tensor
) \u2013 Tensor of 2D images with shape (batch, channels, width, height).
Returns:
processed_tensor
( Tensor
) \u2013 Tensor of processed 2D images.
src/speckcn2/mlmodels.py
def apply_noise(self, image_tensor: torch.Tensor) -> torch.Tensor:\n \"\"\"Processes a tensor of 2D images.\n\n Parameters\n ----------\n image_tensor : torch.Tensor\n Tensor of 2D images with shape (batch, channels, width, height).\n\n Returns\n -------\n processed_tensor : torch.Tensor\n Tensor of processed 2D images.\n \"\"\"\n batch, channels, height, width = image_tensor.shape\n processed_tensor = torch.zeros_like(image_tensor)\n\n # Normalize wrt optical power\n image_tensor = image_tensor / torch.mean(\n image_tensor, dim=(2, 3), keepdim=True)\n\n amp = self.rn * 10**(self.snr / 20)\n\n for i in range(batch):\n for j in range(channels):\n B = image_tensor[i, j]\n\n # Apply masks\n B[self.mask_D] = 0\n B[self.mask_d] = 0\n B[self.mask_X] = 0\n B[self.mask_Y] = 0\n\n # Add noise sources\n A = self.rn + self.rn * torch.randn(\n height, width, device=self.device) + amp * B + torch.sqrt(\n amp * B) * torch.randn(\n height, width, device=self.device)\n\n # Make a discretized version\n if self.discretize == 'on':\n C = torch.round(A / self.fw * 2**self.bit)\n C[A > self.fw] = self.fw\n C[A < 0] = 0\n else:\n C = A\n\n processed_tensor[i, j] = C\n\n return processed_tensor\n
"},{"location":"api/models/#speckcn2.mlmodels.EnsembleModel.create_masks","title":"create_masks(resolution)
","text":"Creates the masks for the circular aperture and the spider.
Parameters:
resolution
(int
) \u2013 Resolution of the images.
Returns:
mask_D
( Tensor
) \u2013 Mask for the circular aperture.
mask_d
( Tensor
) \u2013 Mask for the central obscuration.
mask_X
( Tensor
) \u2013 Mask for the horizontal spider.
mask_Y
( Tensor
) \u2013 Mask for the vertical spider.
src/speckcn2/mlmodels.py
def create_masks(\n self, resolution: int\n) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:\n \"\"\"Creates the masks for the circular aperture and the spider.\n\n Parameters\n ----------\n resolution : int\n Resolution of the images.\n\n Returns\n -------\n mask_D : torch.Tensor\n Mask for the circular aperture.\n mask_d : torch.Tensor\n Mask for the central obscuration.\n mask_X : torch.Tensor\n Mask for the horizontal spider.\n mask_Y : torch.Tensor\n Mask for the vertical spider.\n \"\"\"\n # Coordinates\n x = torch.linspace(-1, 1, resolution, device=self.device)\n X, Y = torch.meshgrid(x, x, indexing='ij') # XY grid\n d = self.dO * self.D # Diameter obscuration\n\n R = torch.sqrt(X**2 + Y**2)\n\n # Masking image\n mask_D = R > self.D\n mask_d = R < d\n mask_X = torch.abs(X) < self.t\n mask_Y = torch.abs(Y) < self.t\n\n return mask_D, mask_d, mask_X, mask_Y\n
"},{"location":"api/models/#speckcn2.mlmodels.EnsembleModel.forward","title":"forward(model, batch_ensemble)
","text":"Forward pass through the model.
Parameters:
model
(Module
) \u2013 The model to use
batch_ensemble
(list
) \u2013 Each element is a batch of an ensemble of samples.
src/speckcn2/mlmodels.py
def forward(self, model, batch_ensemble):\n \"\"\"Forward pass through the model.\n\n Parameters\n ----------\n model : torch.nn.Module\n The model to use\n batch_ensemble : list\n Each element is a batch of an ensemble of samples.\n \"\"\"\n\n if self.ensemble_size == 1:\n batch = batch_ensemble\n # If no ensembling, each element of the batch is a tuple (image, tag, ensemble_id)\n images, tags, ensembles = zip(*batch)\n images = torch.stack(images).to(self.device)\n images = self.apply_noise(images)\n tags = torch.tensor(np.stack(tags)).to(self.device)\n\n return model(images), tags, images\n else:\n batch = list(itertools.chain(*batch_ensemble))\n # Like the ensemble=1 case, I can process independently each element of the batch\n images, tags, ensembles = zip(*batch)\n images = torch.stack(images).to(self.device)\n images = self.apply_noise(images)\n tags = torch.tensor(np.stack(tags)).to(self.device)\n\n model_output = model(images)\n\n # To average the self.ensemble_size outputs of the model I extract the confidence weights\n predictions = model_output[:, :-1]\n weights = model_output[:, -1]\n if self.uniform_ensemble:\n weights = torch.ones_like(weights)\n # multiply the prediction by the weights\n weighted_predictions = predictions * weights.unsqueeze(-1)\n # and sum over the ensembles\n weighted_predictions = weighted_predictions.view(\n model_output.size(0) // self.ensemble_size, self.ensemble_size,\n -1).sum(dim=1)\n # then normalize by the sum of the weights\n sum_weights = weights.view(\n weights.size(0) // self.ensemble_size,\n self.ensemble_size).sum(dim=1)\n ensemble_output = weighted_predictions / sum_weights.unsqueeze(-1)\n\n # and get the tags and ensemble_id of the first element of the ensemble\n tags = tags[::self.ensemble_size]\n ensembles = ensembles[::self.ensemble_size]\n\n return ensemble_output, tags, images\n
"},{"location":"api/models/#speckcn2.mlmodels.get_a_resnet","title":"get_a_resnet(config)
","text":"Returns a pretrained ResNet model, with the last layer corresponding to the number of screens.
Parameters:
config
(dict
) \u2013 Dictionary containing the configuration
Returns:
model
( Module
) \u2013 The model with the loaded state
last_model_state
( int
) \u2013 The number of the last model state
src/speckcn2/mlmodels.py
def get_a_resnet(config: dict) -> tuple[nn.Module, int]:\n \"\"\"Returns a pretrained ResNet model, with the last layer corresponding to\n the number of screens.\n\n Parameters\n ----------\n config : dict\n Dictionary containing the configuration\n\n Returns\n -------\n model : torch.nn.Module\n The model with the loaded state\n last_model_state : int\n The number of the last model state\n \"\"\"\n\n model_name = config['model']['name']\n model_type = config['model']['type']\n pretrained = config['model']['pretrained']\n nscreens = config['speckle']['nscreens']\n data_directory = config['speckle']['datadirectory']\n ensemble = config['preproc'].get('ensemble', 1)\n\n if model_type == 'resnet18':\n model = torchvision.models.resnet18(\n weights='IMAGENET1K_V1' if pretrained else None)\n finaloutsize = 512\n elif model_type == 'resnet50':\n model = torchvision.models.resnet50(\n weights='IMAGENET1K_V2' if pretrained else None)\n finaloutsize = 2048\n elif model_type == 'resnet152':\n model = torchvision.models.resnet152(\n weights='IMAGENET1K_V2' if pretrained else None)\n finaloutsize = 2048\n else:\n raise ValueError(f'Unknown model {model_type}')\n\n # If the model uses multiple images as input,\n # add an extra channel as confidence weight\n # to average the final prediction\n if ensemble > 1:\n nscreens = nscreens + 1\n\n # Give it its name\n model.name = model_name\n\n # Change the model to process black and white input\n model.conv1 = torch.nn.Conv2d(1,\n 64,\n kernel_size=(7, 7),\n stride=(2, 2),\n padding=(3, 3),\n bias=False)\n # Add a final fully connected piece to predict the output\n model.fc = create_final_block(config, finaloutsize, nscreens)\n\n return load_model_state(model, data_directory)\n
"},{"location":"api/models/#speckcn2.mlmodels.get_scnn","title":"get_scnn(config)
","text":"Returns a pretrained Spherical-CNN model, with the last layer corresponding to the number of screens.
Source code insrc/speckcn2/mlmodels.py
def get_scnn(config: dict) -> tuple[nn.Module, int]:\n \"\"\"Returns a pretrained Spherical-CNN model, with the last layer\n corresponding to the number of screens.\"\"\"\n\n model_name = config['model']['name']\n model_type = config['model']['type']\n datadirectory = config['speckle']['datadirectory']\n\n model_map = {\n 'scnnC8': 'C8',\n 'scnnC16': 'C16',\n 'scnnC4': 'C4',\n 'scnnC6': 'C6',\n 'scnnC10': 'C10',\n 'scnnC12': 'C12',\n }\n try:\n scnn_model = SteerableCNN(config, model_map[model_type])\n except KeyError:\n raise ValueError(f'Unknown model {model_type}')\n\n scnn_model.name = model_name\n\n return load_model_state(scnn_model, datadirectory)\n
"},{"location":"api/models/#speckcn2.mlmodels.setup_model","title":"setup_model(config)
","text":"Returns the model specified in the configuration file, with the last layer corresponding to the number of screens.
Parameters:
config
(dict
) \u2013 Dictionary containing the configuration
Returns:
model
( Module
) \u2013 The model with the loaded state
last_model_state
( int
) \u2013 The number of the last model state
src/speckcn2/mlmodels.py
def setup_model(config: dict) -> tuple[nn.Module, int]:\n \"\"\"Returns the model specified in the configuration file, with the last\n layer corresponding to the number of screens.\n\n Parameters\n ----------\n config : dict\n Dictionary containing the configuration\n\n Returns\n -------\n model : torch.nn.Module\n The model with the loaded state\n last_model_state : int\n The number of the last model state\n \"\"\"\n\n model_name = config['model']['name']\n model_type = config['model']['type']\n\n print(f'^^^ Initializing model {model_name} of type {model_type}')\n\n if model_type.startswith('resnet'):\n return get_a_resnet(config)\n elif model_type.startswith('scnnC'):\n return get_scnn(config)\n else:\n raise ValueError(f'Unknown model {model_name}')\n
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#our-pledge","title":"Our Pledge","text":"We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
"},{"location":"CODE_OF_CONDUCT/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to a positive environment for our community include:
Examples of unacceptable behavior include:
## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
"},{"location":"CODE_OF_CONDUCT/#scope","title":"Scope","text":"This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
"},{"location":"CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at s.ciarella@esciencecenter.nl. All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
"},{"location":"CODE_OF_CONDUCT/#enforcement-guidelines","title":"Enforcement Guidelines","text":"Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
"},{"location":"CODE_OF_CONDUCT/#1-correction","title":"1. Correction","text":"Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
"},{"location":"CODE_OF_CONDUCT/#2-warning","title":"2. Warning","text":"Community Impact: A violation through a single incident or series of actions.
Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
"},{"location":"CODE_OF_CONDUCT/#3-temporary-ban","title":"3. Temporary Ban","text":"Community Impact: A serious violation of community standards, including sustained inappropriate behavior.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
"},{"location":"CODE_OF_CONDUCT/#4-permanent-ban","title":"4. Permanent Ban","text":"Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
"},{"location":"CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.
For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
"},{"location":"CONTRIBUTING/","title":"Contributing guidelines","text":"Welcome! SpeckCn2 is an open-source project for analysis of speckle patterns. If you're trying SpeckCn2 with your data, your experience, questions, bugs you encountered, and suggestions for improvement are important to the success of the project.
We have a Code of Conduct, please follow it in all your interactions with the project.
"},{"location":"CONTRIBUTING/#questions-feedback-bugs","title":"Questions, feedback, bugs","text":"Use the search function to see if someone else already ran accross the same issue. Feel free to open a new issue here to ask a question, suggest improvements/new features, or report any bugs that you ran into.
"},{"location":"CONTRIBUTING/#submitting-changes","title":"Submitting changes","text":"Even better than a good bug report is a fix for the bug or the implementation of a new feature. We welcome any contributions that help improve the code.
When contributing to this repository, please first discuss the change you wish to make via an issue with the owners of this repository before making a change.
Contributions can come in the form of:
We use the usual GitHub pull-request flow. For more info see GitHub's own documentation.
Typically this means:
One of the code owners will review your code and request changes if needed. Once your changes have been approved, your contributions will become part of GEMDAT. \ud83c\udf89
"},{"location":"CONTRIBUTING/#getting-started-with-development","title":"Getting started with development","text":""},{"location":"CONTRIBUTING/#setup","title":"Setup","text":"SpeckCn2 targets Python 3.9 or newer.
Clone the repository into the speckcn2
directory:
git clone https://github.com/MALES-project/SpeckleCn2Profiler speckcn2\n
Install using virtualenv
:
cd speckcn2\npython3 -m venv env\nsource env/bin/activate\npython3 -m pip install -e .[develop]\n
Alternatively, install using Conda:
cd speckcn2\nconda create -n speckcn2 python=3.10\nconda activate speckcn2\npip install -e .[develop]\n
"},{"location":"CONTRIBUTING/#running-tests","title":"Running tests","text":"SpeckCn2 uses pytest to run the tests. You can run the tests for yourself using:
pytest\n
To check coverage:
coverage run -m pytest\ncoverage report # to output to terminal\ncoverage html # to generate html report\n
"},{"location":"CONTRIBUTING/#building-the-documentation","title":"Building the documentation","text":"The documentation is written in markdown, and uses mkdocs to generate the pages.
To build the documentation for yourself:
pip install -e .[docs]\nmkdocs serve\n
You can find the documentation source in the docs directory. If you are adding new pages, make sure to update the listing in the mkdocs.yml
under the nav
entry.
### Making a release

Make a new release. Under 'Choose a tag', set the tag to the new version. The versioning scheme we use is SemVer, so bump the version (major/minor/patch) as needed. Bumping the version is handled transparently by `bumpversion` in this workflow.

The upload to PyPI is triggered when a release is published and handled by this workflow.

The upload to Zenodo is triggered when a release is published.
# SpeckleCn2Profiler

## Improving Satellite Communications with SCIDAR and Machine Learning

### Overview

Optical satellite communication is a growing research field with bright commercial prospects. One of the challenges for optical links through the atmosphere is turbulence, which is also apparent in the twinkling of stars. The resulting loss of link quality can be calculated, but this requires knowing the turbulence strength along the path the optical beam travels. Turbulence strength is currently estimated at astronomical sites, but not at rural or urban sites, so a simple instrument is required. We propose to use single-star Scintillation Detection and Ranging (SCIDAR), an instrument that can estimate the turbulence strength from observations of a single star. The main challenge is reliable signal processing of the received star images, which we propose to solve with machine learning.
"},{"location":"home/#project-goals","title":"Project Goals","text":"The primary objectives of this project are:
Turbulence Strength Estimation: Develop a robust algorithm using Machine Learning to estimate turbulence strength based on SCIDAR data.
Signal Processing Enhancement: Implement advanced signal processing techniques to improve the accuracy and reliability of turbulence strength calculations.
Adaptability to Various Sites: Ensure the proposed solution is versatile enough to be deployed in diverse environments, including rural and urban locations.
## Repository Contents

This repository contains:

- **Machine Learning Models**: Implementation of machine learning models tailored for turbulence strength estimation from SCIDAR data.
- **Signal Processing Algorithms**: Advanced signal processing algorithms aimed at enhancing the quality of received star images.
- **Dataset**: Sample datasets for training and testing the machine learning models.
- **Documentation**: In-depth documentation explaining the methodology, algorithms used, and guidelines for using the code.
## Getting Started

To get started with the project, follow these steps:

**Install the package:**

```bash
python -m pip install git+https://github.com/MALES-project/SpeckleCn2Profiler
```

While the above command works, `speckcn2` will be available on PyPI as soon as its dependencies get updated.

**Explore the Code**: Dive into the codebase to understand the implementation details and customize it according to your needs.
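Once installed, a minimal usage sketch (not an official quickstart) might look like the following; it assumes your configuration lives in a YAML file with the keys described in the API section below, and `config.yaml` is a hypothetical path:

```python
# Hedged sketch: only uses setup_model and EnsembleModel as documented in
# the API section below; the config file name and its format are assumptions.
import yaml
import torch
from speckcn2.mlmodels import setup_model, EnsembleModel

with open('config.yaml') as f:           # hypothetical config file
    conf = yaml.safe_load(f)

model, last_state = setup_model(conf)    # e.g. a resnet18 or an scnnC8 model
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
ensemble = EnsembleModel(conf, device)   # wraps the model for ensembled data
```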
We welcome contributions to improve and expand the capabilities of this project. If you have ideas, bug fixes, or enhancements, please submit a pull request. Check out our Contributing Guidelines to get started with development.
"},{"location":"home/#how-to-cite","title":"How to cite","text":"Please consider citing this software that is published in Zenodo under the DOI 10.5281/zenodo.11447920.
"},{"location":"home/#license","title":"License","text":"This project is licensed under the MIT License - see the LICENSE file for details.
"},{"location":"installation/","title":"Installation","text":""},{"location":"installation/#macos-m1-arm64","title":"MacOS M1 arm64","text":"Some dependencies (e.g. scikit
) do not support the latest python version (3.12). Also py3nj
, a dependency of escnn
, requires openmp. We've installed this via homebrew and thus explicitly specifying the C compiler (gnu) prior to installation of this package does the trick.
conda create -n speckcn2 python=3.10\nconda activate speckcn2\nCC=gcc-13 pip3 install py3nj # install py3nj before with gcc instead of clang\npip install -e .\n
"},{"location":"api/api/","title":"speckcn2","text":"EnsembleModel(conf, device)
","text":" Bases: Module
Wrapper that allows any model to be used for ensembled data.
Parameters:
conf
(dict
) \u2013 The global configuration containing the model parameters.
device
(device
) \u2013 The device to use
src/speckcn2/mlmodels.py
```python
def __init__(self, conf: dict, device: torch.device):
    """Initializes the EnsembleModel.

    Parameters
    ----------
    conf: dict
        The global configuration containing the model parameters.
    device : torch.device
        The device to use
    """
    super(EnsembleModel, self).__init__()

    self.ensemble_size = conf['preproc'].get('ensemble', 1)
    self.device = device
    self.uniform_ensemble = conf['preproc'].get('ensemble_unif', False)
    resolution = conf['preproc']['resize']
    self.D = conf['noise']['D']
    self.t = conf['noise']['t']
    self.snr = conf['noise']['snr']
    self.dT = conf['noise']['dT']
    self.dO = conf['noise']['dO']
    self.rn = conf['noise']['rn']
    self.fw = conf['noise']['fw']
    self.bit = conf['noise']['bit']
    self.discretize = conf['noise']['discretize']
    self.mask_D, self.mask_d, self.mask_X, self.mask_Y = self.create_masks(
        resolution)
```
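The constructor reads a fixed set of keys from the `preproc` and `noise` sections of the configuration. Below is a sketch of a matching `conf` dictionary; the key names come from the code above, but every value and unit comment is an illustrative guess, not a default:

```python
conf = {
    'preproc': {
        'ensemble': 2,           # images per ensemble (assumed meaning)
        'ensemble_unif': False,  # True -> uniform instead of learned weights
        'resize': 128,           # image resolution used to build the masks
    },
    'noise': {
        'D': 0.9,     # aperture size in the normalized [-1, 1] grid
        't': 0.05,    # spider thickness
        'snr': 30,    # signal-to-noise ratio in dB
        'dT': 0.1,    # read by the constructor, unused in the methods shown
        'dO': 0.2,    # central obscuration as a fraction of D
        'rn': 2.0,    # read noise
        'fw': 65535,  # full-well / saturation level
        'bit': 12,    # bit depth used for discretization
        'discretize': 'on',
    },
}
```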
"},{"location":"api/models/#speckcn2.mlmodels.EnsembleModel.apply_noise","title":"apply_noise(image_tensor)
","text":"Processes a tensor of 2D images.
Parameters:
image_tensor
(Tensor
) \u2013 Tensor of 2D images with shape (batch, channels, width, height).
Returns:
processed_tensor
( Tensor
) \u2013 Tensor of processed 2D images.
src/speckcn2/mlmodels.py
```python
def apply_noise(self, image_tensor: torch.Tensor) -> torch.Tensor:
    """Processes a tensor of 2D images.

    Parameters
    ----------
    image_tensor : torch.Tensor
        Tensor of 2D images with shape (batch, channels, width, height).

    Returns
    -------
    processed_tensor : torch.Tensor
        Tensor of processed 2D images.
    """
    batch, channels, height, width = image_tensor.shape
    processed_tensor = torch.zeros_like(image_tensor)

    # Normalize wrt optical power
    image_tensor = image_tensor / torch.mean(
        image_tensor, dim=(2, 3), keepdim=True)

    amp = self.rn * 10**(self.snr / 20)

    for i in range(batch):
        for j in range(channels):
            B = image_tensor[i, j]

            # Apply masks
            B[self.mask_D] = 0
            B[self.mask_d] = 0
            B[self.mask_X] = 0
            B[self.mask_Y] = 0

            # Add noise sources
            A = self.rn + self.rn * torch.randn(
                height, width, device=self.device) + amp * B + torch.sqrt(
                    amp * B) * torch.randn(
                        height, width, device=self.device)

            # Make a discretized version
            if self.discretize == 'on':
                C = torch.round(A / self.fw * 2**self.bit)
                C[A > self.fw] = self.fw
                C[A < 0] = 0
            else:
                C = A

            processed_tensor[i, j] = C

    return processed_tensor
```
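The noise model is a read-noise floor plus an amplified signal with signal-dependent (shot) noise, with the amplification set by the SNR. A standalone sketch of the same formula on a toy image (all parameter values are made up):

```python
import torch

rn, snr = 2.0, 30.0                  # read noise and SNR in dB (illustrative)
amp = rn * 10**(snr / 20)            # amplification, as in apply_noise
B = torch.rand(8, 8)                 # stand-in for one normalized, masked image
A = (rn                              # constant offset
     + rn * torch.randn(8, 8)        # Gaussian read noise
     + amp * B                       # amplified signal
     + torch.sqrt(amp * B) * torch.randn(8, 8))  # shot noise ~ sqrt(signal)
```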
"},{"location":"api/models/#speckcn2.mlmodels.EnsembleModel.create_masks","title":"create_masks(resolution)
","text":"Creates the masks for the circular aperture and the spider.
Parameters:
resolution
(int
) \u2013 Resolution of the images.
Returns:
mask_D
( Tensor
) \u2013 Mask for the circular aperture.
mask_d
( Tensor
) \u2013 Mask for the central obscuration.
mask_X
( Tensor
) \u2013 Mask for the horizontal spider.
mask_Y
( Tensor
) \u2013 Mask for the vertical spider.
src/speckcn2/mlmodels.py
```python
def create_masks(
    self, resolution: int
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
    """Creates the masks for the circular aperture and the spider.

    Parameters
    ----------
    resolution : int
        Resolution of the images.

    Returns
    -------
    mask_D : torch.Tensor
        Mask for the circular aperture.
    mask_d : torch.Tensor
        Mask for the central obscuration.
    mask_X : torch.Tensor
        Mask for the horizontal spider.
    mask_Y : torch.Tensor
        Mask for the vertical spider.
    """
    # Coordinates
    x = torch.linspace(-1, 1, resolution, device=self.device)
    X, Y = torch.meshgrid(x, x, indexing='ij')  # XY grid
    d = self.dO * self.D  # Diameter obscuration

    R = torch.sqrt(X**2 + Y**2)

    # Masking image
    mask_D = R > self.D
    mask_d = R < d
    mask_X = torch.abs(X) < self.t
    mask_Y = torch.abs(Y) < self.t

    return mask_D, mask_d, mask_X, mask_Y
```
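Because the grid spans [-1, 1] in both directions, the masks select pixels outside the aperture `D`, inside the central obscuration `dO * D`, and within a band of half-width `t` around each axis. The same formulas on a coarse grid, for intuition (values are illustrative):

```python
import torch

resolution, D, dO, t = 7, 0.9, 0.2, 0.1      # illustrative values
x = torch.linspace(-1, 1, resolution)
X, Y = torch.meshgrid(x, x, indexing='ij')
R = torch.sqrt(X**2 + Y**2)

mask_D = R > D              # outside the circular aperture
mask_d = R < dO * D         # inside the central obscuration
mask_X = torch.abs(X) < t   # horizontal spider arm
mask_Y = torch.abs(Y) < t   # vertical spider arm
print((mask_D | mask_d | mask_X | mask_Y).int())  # 1 = pixel zeroed out
```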
"},{"location":"api/models/#speckcn2.mlmodels.EnsembleModel.forward","title":"forward(model, batch_ensemble)
","text":"Forward pass through the model.
Parameters:
model
(Module
) \u2013 The model to use
batch_ensemble
(list
) \u2013 Each element is a batch of an ensemble of samples.
src/speckcn2/mlmodels.py
```python
def forward(self, model, batch_ensemble):
    """Forward pass through the model.

    Parameters
    ----------
    model : torch.nn.Module
        The model to use
    batch_ensemble : list
        Each element is a batch of an ensemble of samples.
    """

    if self.ensemble_size == 1:
        batch = batch_ensemble
        # If no ensembling, each element of the batch is a tuple (image, tag, ensemble_id)
        images, tags, ensembles = zip(*batch)
        images = torch.stack(images).to(self.device)
        images = self.apply_noise(images)
        tags = torch.tensor(np.stack(tags)).to(self.device)

        return model(images), tags, images
    else:
        batch = list(itertools.chain(*batch_ensemble))
        # Like the ensemble=1 case, I can process independently each element of the batch
        images, tags, ensembles = zip(*batch)
        images = torch.stack(images).to(self.device)
        images = self.apply_noise(images)
        tags = torch.tensor(np.stack(tags)).to(self.device)

        model_output = model(images)

        # To average the self.ensemble_size outputs of the model I extract the confidence weights
        predictions = model_output[:, :-1]
        weights = model_output[:, -1]
        if self.uniform_ensemble:
            weights = torch.ones_like(weights)
        # multiply the prediction by the weights
        weighted_predictions = predictions * weights.unsqueeze(-1)
        # and sum over the ensembles
        weighted_predictions = weighted_predictions.view(
            model_output.size(0) // self.ensemble_size, self.ensemble_size,
            -1).sum(dim=1)
        # then normalize by the sum of the weights
        sum_weights = weights.view(
            weights.size(0) // self.ensemble_size,
            self.ensemble_size).sum(dim=1)
        ensemble_output = weighted_predictions / sum_weights.unsqueeze(-1)

        # and get the tags and ensemble_id of the first element of the ensemble
        tags = tags[::self.ensemble_size]
        ensembles = ensembles[::self.ensemble_size]

        return ensemble_output, tags, images
```
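In the ensemble branch, the last output channel of each sample acts as a confidence weight: predictions are multiplied by their weight, summed within each ensemble, and normalized by the summed weights. A toy numeric sketch of just that reduction (shapes and values are illustrative):

```python
import torch

ensemble_size = 2
# two samples of one ensemble: columns are [prediction..., confidence weight]
model_output = torch.tensor([[1.0, 2.0, 0.5],
                             [3.0, 4.0, 1.5]])
predictions = model_output[:, :-1]
weights = model_output[:, -1]
weighted = (predictions * weights.unsqueeze(-1)).view(
    model_output.size(0) // ensemble_size, ensemble_size, -1).sum(dim=1)
sum_w = weights.view(-1, ensemble_size).sum(dim=1)
print(weighted / sum_w.unsqueeze(-1))  # tensor([[2.5000, 3.5000]])
```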
"},{"location":"api/models/#speckcn2.mlmodels.get_a_resnet","title":"get_a_resnet(config)
","text":"Returns a pretrained ResNet model, with the last layer corresponding to the number of screens.
Parameters:
config
(dict
) \u2013 Dictionary containing the configuration
Returns:
model
( Module
) \u2013 The model with the loaded state
last_model_state
( int
) \u2013 The number of the last model state
src/speckcn2/mlmodels.py
```python
def get_a_resnet(config: dict) -> tuple[nn.Module, int]:
    """Returns a pretrained ResNet model, with the last layer corresponding to
    the number of screens.

    Parameters
    ----------
    config : dict
        Dictionary containing the configuration

    Returns
    -------
    model : torch.nn.Module
        The model with the loaded state
    last_model_state : int
        The number of the last model state
    """

    model_name = config['model']['name']
    model_type = config['model']['type']
    pretrained = config['model']['pretrained']
    nscreens = config['speckle']['nscreens']
    data_directory = config['speckle']['datadirectory']
    ensemble = config['preproc'].get('ensemble', 1)

    if model_type == 'resnet18':
        model = torchvision.models.resnet18(
            weights='IMAGENET1K_V1' if pretrained else None)
        finaloutsize = 512
    elif model_type == 'resnet50':
        model = torchvision.models.resnet50(
            weights='IMAGENET1K_V2' if pretrained else None)
        finaloutsize = 2048
    elif model_type == 'resnet152':
        model = torchvision.models.resnet152(
            weights='IMAGENET1K_V2' if pretrained else None)
        finaloutsize = 2048
    else:
        raise ValueError(f'Unknown model {model_type}')

    # If the model uses multiple images as input,
    # add an extra channel as confidence weight
    # to average the final prediction
    if ensemble > 1:
        nscreens = nscreens + 1

    # Give it its name
    model.name = model_name

    # Change the model to process black and white input
    model.conv1 = torch.nn.Conv2d(1,
                                  64,
                                  kernel_size=(7, 7),
                                  stride=(2, 2),
                                  padding=(3, 3),
                                  bias=False)
    # Add a final fully connected piece to predict the output
    model.fc = create_final_block(config, finaloutsize, nscreens)

    return load_model_state(model, data_directory)
```
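The two surgery steps above (replacing `conv1` with a single-channel convolution and swapping `fc`) are standard torchvision idioms. A self-contained sketch, with a plain `Linear` as a stand-in for `create_final_block`:

```python
import torch
import torchvision

nscreens = 8                                      # illustrative value
model = torchvision.models.resnet18(weights=None)
# accept single-channel (grayscale) input instead of RGB
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2),
                              padding=(3, 3), bias=False)
# plain Linear as a stand-in for create_final_block
model.fc = torch.nn.Linear(512, nscreens)
out = model(torch.randn(4, 1, 128, 128))          # batch of 4 grayscale images
print(out.shape)                                  # torch.Size([4, 8])
```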
"},{"location":"api/models/#speckcn2.mlmodels.get_scnn","title":"get_scnn(config)
","text":"Returns a pretrained Spherical-CNN model, with the last layer corresponding to the number of screens.
Source code insrc/speckcn2/mlmodels.py
```python
def get_scnn(config: dict) -> tuple[nn.Module, int]:
    """Returns a pretrained Spherical-CNN model, with the last layer
    corresponding to the number of screens."""

    model_name = config['model']['name']
    model_type = config['model']['type']
    datadirectory = config['speckle']['datadirectory']

    model_map = {
        'scnnC8': 'C8',
        'scnnC16': 'C16',
        'scnnC4': 'C4',
        'scnnC6': 'C6',
        'scnnC10': 'C10',
        'scnnC12': 'C12',
    }
    try:
        scnn_model = SteerableCNN(config, model_map[model_type])
    except KeyError:
        raise ValueError(f'Unknown model {model_type}')

    scnn_model.name = model_name

    return load_model_state(scnn_model, datadirectory)
```
"},{"location":"api/models/#speckcn2.mlmodels.setup_model","title":"setup_model(config)
","text":"Returns the model specified in the configuration file, with the last layer corresponding to the number of screens.
Parameters:
config
(dict
) \u2013 Dictionary containing the configuration
Returns:
model
( Module
) \u2013 The model with the loaded state
last_model_state
( int
) \u2013 The number of the last model state
src/speckcn2/mlmodels.py
```python
def setup_model(config: dict) -> tuple[nn.Module, int]:
    """Returns the model specified in the configuration file, with the last
    layer corresponding to the number of screens.

    Parameters
    ----------
    config : dict
        Dictionary containing the configuration

    Returns
    -------
    model : torch.nn.Module
        The model with the loaded state
    last_model_state : int
        The number of the last model state
    """

    model_name = config['model']['name']
    model_type = config['model']['type']

    print(f'^^^ Initializing model {model_name} of type {model_type}')

    if model_type.startswith('resnet'):
        return get_a_resnet(config)
    elif model_type.startswith('scnnC'):
        return get_scnn(config)
    else:
        raise ValueError(f'Unknown model {model_name}')
```
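Dispatch is driven purely by the prefix of `config['model']['type']`. A sketch of the keys the resnet branch reads directly (all values are illustrative, and `create_final_block` may require further keys):

```python
config = {
    'model': {'name': 'my_resnet', 'type': 'resnet18', 'pretrained': True},
    'speckle': {'nscreens': 8, 'datadirectory': './data'},
    'preproc': {'ensemble': 1},
}
# setup_model(config) routes to get_a_resnet(config) here;
# a type like 'scnnC8' would route to get_scnn(config) instead.
```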
"}]}
\ No newline at end of file