Add Rosetta V2 processing functionality (#319)
* Initial commit of Rosetta V2

* PYCODESTYLE

* Clarify documentation

* Add comments to variables in the notebook

* Allow user to list all runs for testing as well

* Add option to turn off test set generation for Rosetta V2

* Separate out Rosetta V2 process into a new notebook

* Remove run names comment

* Clarify comment

* Skip percent_norm for Rosetta V2

* Remove comment to set percent_norm to None

* Address more comment requests

* Add all channels back into V2

* Add a new V2 commercial matrix

* Remove timing printouts for Rosetta

* Use "Round 2" instead of "v2"

* Further disambiguate v2 and Round 2

* Fix format of round 2 Rosetta matrix

* Add final_output_channel_names back in

* Address documentation and remove timeit imports

* Update description

* Remove old panel_utils.py

* Test ._ removal

* Ensure hidden ._ files get deleted

* Typo in notebook

* Implement scheme to copy uncompensated images from round 1 to round 2

* Make non_output_targets an explicit list to work with io_utils

* Documentation fixes

* Add scheme to combine compensation files together

* Clarify documentation for round 2 compensation combination

* Reverting out of compensation, leave that to PR

* Re-add compensation scheme

* Undo commit, this should be for rosetta_v2_comp

* Add method to combine compensation matrices for Rosetta V2 (#366)

* Add compensation combination scheme

* Fully flesh out Rosetta V2 pipeline

* Documentation and variable changes for consistency

* Fix comma spacing in subprocess call

* Implement less strict run checking for R1R2 Rosetta copying (round two just needs to be a subset of round one)

* Implement some notebook QOL changes

* Ensure the names of the current_channel and output_channel match that in the example matrix row and column respectively
alex-l-kong authored Aug 11, 2023
1 parent f4d7f87 commit 62e1f84
Showing 6 changed files with 720 additions and 22 deletions.
47 changes: 47 additions & 0 deletions files/commercial_rosetta_matrix_round2.csv
@@ -0,0 +1,47 @@
,39,48,56,69,71,89,113,115,117,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,197
39,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
48,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
56,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
69,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
71,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
89,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
113,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
115,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
117,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
141,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
142,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
143,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
144,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
145,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
147,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
148,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
149,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
150,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
151,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
152,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
153,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
154,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
155,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
156,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
157,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
158,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
159,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
160,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
161,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
162,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
163,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
164,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
165,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
166,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
167,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
168,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
169,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
170,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
171,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
172,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
173,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
174,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
175,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
176,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
197,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
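The round-2 commercial matrix above is an all-zero template: each row is a source mass (the `current_channel`) and each column an output mass, and only the pairs that still need compensation after round 1 would receive non-zero coefficients. A minimal sketch of loading and editing it with pandas; the file path and the example mass pair are illustrative, not part of this commit:

```python
import pandas as pd

# load the blank round-2 template; the unnamed first column holds the source masses
matrix_path = "files/commercial_rosetta_matrix_round2.csv"  # illustrative path
round2 = pd.read_csv(matrix_path, index_col=0)

# hypothetical edit: subtract 2% of mass 148's signal from the mass 149 output channel
round2.loc[148, "149"] = 0.02

# write the matrix back out, keeping the mass labels as the index column
round2.to_csv(matrix_path)
```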
106 changes: 96 additions & 10 deletions src/toffy/rosetta.py
@@ -2,6 +2,7 @@
import os
import random
import shutil
import subprocess
import warnings

import natsort as ns
@@ -157,6 +158,9 @@ def clean_rosetta_test_dir(folder_path):
folder_path (str): base dir for testing, image subdirs will be stored here
"""

    # remove any files beginning with ._ so hidden files created by external drives are cleared
_ = subprocess.call(["find", folder_path, "-type", "f", "-name", "._*", "-delete"])

# remove the compensated data folders
comp_folders = io_utils.list_folders(folder_path, substrs="compensated_data_")
for cf in comp_folders:
@@ -166,6 +170,34 @@
shutil.rmtree(os.path.join(folder_path, "stitched_images"))


def combine_compensation_files(comp_matrix_path, compensation_matrix_names, final_matrix_name):
"""Combine a list of round two compensation matrix files in a given cohort folder.
This is done additively since round two compensation files are mutually exclusive w.r.t.
output channels.
Args:
        comp_matrix_path (str):
            Path to the folder containing the compensation matrix files to combine
        compensation_matrix_names (list):
            List of files inside `comp_matrix_path` to combine
        final_matrix_name (str):
            Name of the file to write the combined compensation matrix to
"""

# load in the first matrix inside compensation_matrix_names
final_compensation_matrix = pd.read_csv(
os.path.join(comp_matrix_path, compensation_matrix_names[0])
)

# loop over the rest and add them in
for matrix in compensation_matrix_names[1:]:
final_compensation_matrix = final_compensation_matrix.add(
pd.read_csv(os.path.join(comp_matrix_path, matrix))
)

# save the final compensation matrix to final_matrix_name
final_compensation_matrix.to_csv(os.path.join(comp_matrix_path, final_matrix_name), index=False)
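A minimal usage sketch for `combine_compensation_files`, assuming two round-2 matrices that compensate disjoint sets of output channels; the folder and file names below are hypothetical:

```python
from toffy import rosetta

# additively merge two per-channel round-2 matrices into a single compensation matrix
rosetta.combine_compensation_files(
    comp_matrix_path="D:\\Rosetta_processing\\rosetta_matrices",
    compensation_matrix_names=[
        "rosetta_matrix_round2_chan1.csv",
        "rosetta_matrix_round2_chan2.csv",
    ],
    final_matrix_name="rosetta_matrix_round2_combined.csv",
)
```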


def flat_field_correction(img, gaus_rad=100):
"""Apply flat field correction to an image
@@ -254,7 +286,6 @@ def compensate_image_data(
correct_streaks (bool): whether to correct streaks in the image
streak_chan (str): the channel to use for streak correction
"""

io_utils.validate_paths([raw_data_dir, comp_data_dir, comp_mat_path])

# get list of all fovs
@@ -369,6 +400,42 @@ def compensate_image_data(
image_utils.save_image(save_path, comp_data[j, :, :, idx])


def copy_round_one_compensated_images(
round_one_comp_folder, round_two_comp_folder, channels_to_copy
):
"""Copies channels that don't need round two compensation to the round two comp folder
Args:
round_one_comp_folder (str):
path to the round one Rosetta compensated images
round_two_comp_folder (str):
path to the round two Rosetta compensated images
channels_to_copy (list):
channels to copy from round_one_comp_folder to round_two_comp_folder
"""
io_utils.validate_paths([round_one_comp_folder, round_two_comp_folder])

    # verify each run in the round two Rosetta folder is also present in the round one folder
    r1_runs = io_utils.list_folders(round_one_comp_folder)
    r2_runs = io_utils.list_folders(round_two_comp_folder)
    misc_utils.verify_in_list(round_two_comp_runs=r2_runs, round_one_comp_runs=r1_runs)

# for each FOV, copy the channel from their r1_runs folder to r2_runs folder
for run in r2_runs:
fovs = io_utils.list_folders(os.path.join(round_one_comp_folder, run), substrs="fov")

for fov in fovs:
channel_files = io_utils.list_files(
os.path.join(round_one_comp_folder, run, fov, "rescaled"), substrs=channels_to_copy
)

for cf in channel_files:
shutil.copy(
os.path.join(round_one_comp_folder, run, fov, "rescaled", cf),
os.path.join(round_two_comp_folder, run, fov, "rescaled", cf),
)
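A sketch of how the copy step above might be invoked once both rounds of compensation have produced `rescaled` images; the paths and channel names are hypothetical:

```python
from toffy import rosetta

# copy channels that only needed round-1 compensation into the round-2 output tree,
# assuming the <comp_folder>/<run>/<fov>/rescaled/<channel>.tiff layout used above
rosetta.copy_round_one_compensated_images(
    round_one_comp_folder="D:\\Rosetta_processing\\rosetta_images_round1",
    round_two_comp_folder="D:\\Rosetta_processing\\rosetta_images_round2",
    channels_to_copy=["CD45", "CD8"],  # hypothetical channel names
)
```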


def create_tiled_comparison(
input_dir_list, output_dir, max_img_size, img_sub_folder="rescaled", channels=None
):
@@ -390,8 +457,14 @@
channels=channels,
)

channels = test_data.channels.values
chanel_num = len(channels)
if not channels:
channels = test_data.channels.values

misc_utils.verify_in_list(
provided_channels=channels, test_data_channels=test_data.channels.values
)

channel_num = len(channels)

# check that all dirs have the same number of fovs and correct subset of channels
fov_names = io_utils.list_folders(input_dir_list[0])
@@ -406,7 +479,7 @@
fov_num = len(fov_names)

# loop over each channel
for j in range(chanel_num):
for j in range(channel_num):
# create tiled array of dirs x fovs
tiled_image = np.zeros(
(max_img_size * len(input_dir_list), max_img_size * fov_num),
@@ -451,7 +524,8 @@ def add_source_channel_to_tiled_image(
img_sub_folder (str): subfolder within raw_img_dir to load images from
max_img_size (int): largest fov image size
source_channel (str): the channel which will be prepended to the tiled images
percent_norm (int): percentile normalization param to enable easy visualization
percent_norm (int): percentile normalization param to enable easy visualization, set to
None to skip this step
"""

# load source images
@@ -465,7 +539,9 @@
# convert stacked images to concatenated row
source_list = [source_imgs.values[fov, :, :, 0] for fov in range(source_imgs.shape[0])]
source_row = np.concatenate(source_list, axis=1)
perc_source = np.percentile(source_row, percent_norm)

# get percentile of source row if percent_norm set, otherwise leave unset
perc_source = np.percentile(source_row, percent_norm) if percent_norm else None

# confirm tiled images have expected shape
tiled_images = io_utils.list_files(tiled_img_dir)
@@ -481,9 +557,13 @@
for tile_name in tiled_images:
current_tile = io.imread(os.path.join(tiled_img_dir, tile_name))

# normalize the source row to be in the same range as the current tile
perc_tile = np.percentile(current_tile, percent_norm)
perc_ratio = perc_source / perc_tile
# if percent_norm set, normalize the source row to be in the same range as the current tile
# otherwise, just leave as is (divide by 1)
perc_ratio = 1
if percent_norm:
perc_tile = np.percentile(current_tile, percent_norm)
perc_ratio = perc_source / perc_tile

rescaled_source = source_row / perc_ratio

# combine together and save
@@ -709,9 +789,12 @@ def generate_rosetta_test_imgs(
panel,
current_channel_name="Noodle",
output_channel_names=None,
gaus_rad=1,
norm_const=1,
ffc_masses=[39],
):
"""Compensate example FOV images based on given multipliers
Args:
rosetta_mat_path (str): path to rosetta compensation matrix
img_out_dir (str): directory where extracted images are stored
@@ -720,6 +803,8 @@
panel (pd.DataFrame): the panel containing the masses and channel names
current_channel_name (str): channel being adjusted, default Noodle
output_channel_names (list): subset of the channels to compensate for, default None is all
        gaus_rad (int): radius for blurring image data. Passing 0 will result in no blurring
        norm_const (int): constant used for rescaling
ffc_masses (list): masses that need to be flat field corrected.
Returns:
@@ -765,7 +850,8 @@
raw_data_sub_folder="rescaled",
panel_info=panel,
batch_size=1,
norm_const=1,
gaus_rad=gaus_rad,
norm_const=norm_const,
output_masses=output_masses,
ffc_masses=ffc_masses,
)
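The hunk above also threads the new `gaus_rad` and `norm_const` arguments through to the internal `compensate_image_data` call. A hedged sketch of a test call that overrides those defaults; every path and channel name below is illustrative, and the panel is assumed to be a standard toffy panel file:

```python
import os
import pandas as pd
from toffy import rosetta

panel = pd.read_csv("C:\\Users\\Customer.ION\\Documents\\panel_files\\my_cool_panel.csv")

folder_path = "D:\\Rosetta_processing\\rosetta_testing\\20220101_new_cohort\\round2_test"
os.makedirs(folder_path, exist_ok=True)

# hypothetical round-2 test: disable Gaussian blurring, keep the default rescaling constant
rosetta.generate_rosetta_test_imgs(
    rosetta_mat_path="D:\\Rosetta_processing\\rosetta_matrices\\commercial_rosetta_matrix_round2.csv",
    img_out_dir="D:\\Rosetta_processing\\rosetta_testing\\20220101_new_cohort\\extracted_images",
    multipliers=[0.5, 1, 2],
    folder_path=folder_path,
    panel=panel,
    current_channel_name="CD11c",  # hypothetical channel being re-compensated in round 2
    output_channel_names=None,
    gaus_rad=0,    # new parameter: 0 skips blurring
    norm_const=1,  # new parameter: rescaling constant
)
```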
37 changes: 30 additions & 7 deletions templates/4a_compensate_image_data.ipynb
@@ -87,7 +87,11 @@
"# run specifications\n",
"cohort_name = '20220101_new_cohort'\n",
"run_names = ['20220101_TMA1', '20220102_TMA2']\n",
"panel_path = 'C:\\\\Users\\\\Customer.ION\\\\Documents\\\\panel_files\\\\my_cool_panel.csv'"
"panel_path = 'C:\\\\Users\\\\Customer.ION\\\\Documents\\\\panel_files\\\\my_cool_panel.csv'\n",
"extracted_imgs_dir = 'D:\\\\Extracted_Images'\n",
"\n",
"# if you would like to process all of the run folders in the image dir instead of just the runs tested, you can use the below line\n",
"# run_names = list_folders(extracted_imgs_dir)"
]
},
{
@@ -151,10 +155,20 @@
"We'll now process the images with rosetta to remove signal contamination at varying levels. **By default we'll be testing out coefficient multipliers in proportion to their value in the default matrix for the Noodle channel, since it is the main source of noise in most images.** For example, specifying multipliers of 0.5, 1, and 2 would test coefficients that are half the size, the same size, and twice the size of the Noodle coefficients in the default matrix, respectively. **This will give us a new set of compensated images, using different values in each compensation matrix.**"
]
},
{
"cell_type": "markdown",
"id": "48985d3b-524a-4d8a-8fe2-9abdcfc09f35",
"metadata": {},
"source": [
"* `current_channel_name`: the channel that you will be optimizing the coefficient for.\n",
"* `multipliers`: the range of values to multiply the default matrix by to get new coefficients.\n",
"* `folder_name`: the name of the folder to store the Rosetta data. This will be placed in `rosetta_testing_dir/cohort_name`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "38596718-1c90-4fc1-9ecc-3390b20eefe2",
"execution_count": null,
"id": "138ba84e-40ae-4b63-bda4-8a4b8078ec46",
"metadata": {},
"outputs": [],
"source": [
@@ -193,7 +207,7 @@
"else:\n",
" os.makedirs(folder_path)\n",
"\n",
"# compensate the example fov images \n",
"# compensate the example fov images\n",
"rosetta.generate_rosetta_test_imgs(rosetta_mat_path, img_out_dir, multipliers, folder_path, \n",
" panel, current_channel_name, output_channel_names=None)"
]
@@ -229,7 +243,8 @@
"output_dir = os.path.join(rosetta_testing_dir, cohort_name, folder_name + '-stitched_with_' + current_channel_name)\n",
"os.makedirs(output_dir)\n",
"rosetta.add_source_channel_to_tiled_image(raw_img_dir=img_out_dir, tiled_img_dir=stitched_dir,\n",
" output_dir=output_dir, source_channel=current_channel_name, max_img_size=img_size)\n",
" output_dir=output_dir, source_channel=current_channel_name,\n",
" max_img_size=img_size)\n",
"\n",
"# remove the intermediate compensated_data_{mult} and stitched_image dirs\n",
"rosetta.clean_rosetta_test_dir(folder_path)"
@@ -248,7 +263,7 @@
"id": "963ee77e-d096-4e87-8fa5-d4615bd23f3d",
"metadata": {},
"source": [
"There will now exist a folder named `folder_name-stitched_with_Noodle` (based on the folder name you provided above for this test) in your cohort testing directory. You can look through these stitched images to visualize what signal is being removed from the Noodle channel.\n",
"There will now exist a folder named `{folder_name}-stitched_with_Noodle` (based on the folder name you provided above for this test) in your cohort testing directory. You can look through these stitched images to visualize what signal is being removed from the Noodle channel.\n",
"\n",
"These files will contain 5 rows of images: \n",
"- row 1: the Noodle signal\n",
@@ -286,7 +301,7 @@
"\n",
"# copy final rosetta matrix to matrix folder\n",
"rosetta_matrix_dir = 'D:\\\\Rosetta_processing\\\\rosetta_matrices'\n",
"shutil.copyfile(rosetta_path, os.path.join(rosetta_matrix_dir, final_matrix_name))"
"_ = shutil.copyfile(rosetta_path, os.path.join(rosetta_matrix_dir, final_matrix_name))"
]
},
{
@@ -395,6 +410,14 @@
" comp_data_dir=os.path.join(rosetta_image_dir, run), \n",
" comp_mat_path=final_rosetta_path, panel_info=panel, batch_size=1)"
]
},
{
"cell_type": "markdown",
"id": "17ee464b-795d-495e-9b3f-e45a9629eb20",
"metadata": {},
"source": [
"<b>NOTE: If you wish to run a second round of Rosetta to further denoise specific channels, please head to the [Rosetta Round 2 notebook](https://github.com/angelolab/toffy/blob/main/templates/4a_compensate_image_data_v2.ipynb).</b>"
]
}
],
"metadata": {