Merge branch 'main' into refinedelay
bbfrederick authored Dec 20, 2024
2 parents de79eed + ba2c98c commit 5dcfa5f
Showing 30 changed files with 98 additions and 67 deletions.
25 changes: 25 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,25 @@
# Codespell configuration is within pyproject.toml
---
name: Codespell

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

permissions:
  contents: read

jobs:
  codespell:
    name: Check for spelling errors
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Annotate locations with typos
        uses: codespell-project/codespell-problem-matcher@v1
      - name: Codespell
        uses: codespell-project/actions-codespell@v2
48 changes: 24 additions & 24 deletions CHANGELOG.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion CODE_OF_CONDUCT.md
@@ -5,7 +5,7 @@ participation in our project and our community to be a harassment-free
experience for everyone.

Although no list can hope to be all-encompassing, we explicitly honor diversity in age,
-body size, disability, ethnicity, gender identity and expression, level of experience, native language, education, socio-economic status, nationality, personal appearance, race, religion,
+body size, disability, ethnicity, gender identity and expression, level of experience, native language, education, socioeconomic status, nationality, personal appearance, race, religion,
or sexual identity and orientation.

## Our Standards
6 changes: 3 additions & 3 deletions docs/legacy.rst
@@ -56,7 +56,7 @@ For compatibility with old workflows, rapidtide can be called using legacy synta
--permutationmethod=METHOD - Method for permuting the regressor for significance estimation. Default
is shuffle
--skipsighistfit - Do not fit significance histogram with a Johnson SB function
---windowfunc=FUNC - Use FUNC window funcion prior to correlation. Options are
+--windowfunc=FUNC - Use FUNC window function prior to correlation. Options are
hamming (default), hann, blackmanharris, and None
--nowindow - Disable precorrelation windowing
-f GAUSSSIGMA - Spatially filter fMRI data prior to analysis using
@@ -112,7 +112,7 @@ For compatibility with old workflows, rapidtide can be called using legacy synta
--liang - Use generalized cross-correlation with Liang weighting function
(Liang, et al, doi:10.1109/IMCCC.2015.283)
--eckart - Use generalized cross-correlation with Eckart weighting function
---corrmaskthresh=PCT - Do correlations in voxels where the mean exceeeds this
+--corrmaskthresh=PCT - Do correlations in voxels where the mean exceeds this
percentage of the robust max (default is 1.0)
--corrmask=MASK - Only do correlations in voxels in MASK (if set, corrmaskthresh
is ignored).
@@ -224,7 +224,7 @@ For compatibility with old workflows, rapidtide can be called using legacy synta
--acfix - Perform a secondary correlation to disambiguate peak location
(enables --accheck). Experimental.
--tmask=MASKFILE - Only correlate during epochs specified in
-MASKFILE (NB: if file has one colum, the length needs to match
+MASKFILE (NB: if file has one column, the length needs to match
the number of TRs used. TRs with nonzero values will be used
in analysis. If there are 2 or more columns, each line of MASKFILE
contains the time (first column) and duration (second column) of an
2 changes: 1 addition & 1 deletion docs/theoryofoperation.rst
@@ -480,7 +480,7 @@ Time delay determination

This is the core of the program, that actually does the delay determination.
It's currently divided into two parts -
-calculation of a time dependant similarity function between the sLFO regressor and each voxel
+calculation of a time dependent similarity function between the sLFO regressor and each voxel
(currently using one of three methods),
and then a fitting step to find the peak time delay and strength of association between the two signals.

2 changes: 1 addition & 1 deletion docs/usage_general.rst
@@ -107,7 +107,7 @@ waveform, you'd run:
happytest_desc-slicerescardfromfmri_timeseries.json:cardiacfromfmri,cardiacfromfmri_dlfiltered \
--format separate

-There are some companion programs - ``showxy`` works on 2D (x, y) data; ``showhist`` is specificaly for viewing
+There are some companion programs - ``showxy`` works on 2D (x, y) data; ``showhist`` is specifically for viewing
histograms (``_hist`` files) generated by several programs in the package, ``spectrogram`` generates and displays spectrograms of
time series data. Each of these is separately documented below.

4 changes: 2 additions & 2 deletions docs/usage_rapidtide.rst
@@ -505,7 +505,7 @@ For this type of analysis, a good place to start is the following:

The first option (``--numnull 0``), shuts off the calculation of the null correlation distribution. This is used to
determine the significance threshold, but the method currently implemented in rapidtide is a bit simplistic - it
-assumes that all the time points in the data are exchangable. This is certainly true for resting state data (see
+assumes that all the time points in the data are exchangeable. This is certainly true for resting state data (see
above), but it is very much NOT true for block paradigm gas challenges. To properly analyze those, I need to
consider what time points are 'equivalent', and up to now, I don't, so setting the number of iterations in the
Monte Carlo analysis to zero omits this step.
@@ -580,7 +580,7 @@ and more importantly, we have access to some much higher quality NIRS data, this
The majority of the work was already done, I just needed to account for a few qualities that make NIRS data different from fMRI data:

* NIRS data is not generally stored in NIFTI files. While there is one now (SNIRF), at the time I started doing this, there was no standard NIRS file format. In the absence of one, you could do worse than a multicolumn text file, with one column per data channel. That's what I did here - if the file has a '.txt' extension rather than '.nii.', '.nii.gz', or no extension, it will assume all I/O should be done on multicolumn text files. However, I'm a firm believer in SNIRF, and will add support for it one of these days.
-* NIRS data is often zero mean. This turned out to mess with a lot of my assumptions about which voxels have significant data, and mask construction. This has led to some new options for specifying mask threshholds and data averaging.
+* NIRS data is often zero mean. This turned out to mess with a lot of my assumptions about which voxels have significant data, and mask construction. This has led to some new options for specifying mask thresholds and data averaging.
* NIRS data is in some sense "calibrated" as relative micromolar changes in oxy-, deoxy-, and total hemoglobin concentration, so mean and/or variance normalizing the timecourses may not be right thing to do. I've added in some new options to mess with normalizations.


2 changes: 1 addition & 1 deletion docs/usage_tidepool.rst
@@ -3,7 +3,7 @@ tidepool

Description:
^^^^^^^^^^^^
-Tidepool is a handy tool for displaying all of the various maps generated by rapidtide in one place, overlayed on an anatomic image. This makes it easier to see how all the maps are related to one another. To use it, launch tidepool from the command line, navigate to a rapidtide output directory, and then select a lag time (maxcorr) map. tidpool will figure out the root name and pull in all of the other associated maps, timecourses, and info files. The displays are live, and linked together, so you can explore multiple parameters efficiently. Works in native or standard space.
+Tidepool is a handy tool for displaying all of the various maps generated by rapidtide in one place, overlaid on an anatomic image. This makes it easier to see how all the maps are related to one another. To use it, launch tidepool from the command line, navigate to a rapidtide output directory, and then select a lag time (maxcorr) map. tidpool will figure out the root name and pull in all of the other associated maps, timecourses, and info files. The displays are live, and linked together, so you can explore multiple parameters efficiently. Works in native or standard space.

.. image:: images/tidepool_overview.jpg
:align: center
7 changes: 7 additions & 0 deletions pyproject.toml
@@ -136,3 +136,10 @@ versionfile_source = 'rapidtide/_version.py'
versionfile_build = 'rapidtide/_version.py'
tag_prefix = 'v'
parentdir_prefix = 'rapidtide-'

[tool.codespell]
# Ref: https://github.com/codespell-project/codespell#using-a-config-file
skip = '.git*,versioneer.py,*.css,exportlist.txt,data,*.bib'
check-hidden = true
ignore-regex = '\bsubjeT\b'
ignore-words-list = 'thex,normall'
2 changes: 1 addition & 1 deletion rapidtide/OrthoImageItem.py
@@ -148,7 +148,7 @@ def __init__(
self.offsetz = self.imgsize * (0.5 - self.zfov / (2.0 * self.maxfov))

if self.verbose > 1:
-print("OrthoImageItem intialization:")
+print("OrthoImageItem initialization:")
print(" Dimensions:", self.xdim, self.ydim, self.zdim)
print(" Voxel sizes:", self.xsize, self.ysize, self.zsize)
print(" FOVs:", self.xfov, self.yfov, self.zfov)
2 changes: 1 addition & 1 deletion rapidtide/correlate.py
@@ -489,7 +489,7 @@ def cross_mutual_info(
-------
if returnaxis is True:
thexmi_x : 1D array
-the set of offsets at which cross mutual information is calcuated
+the set of offsets at which cross mutual information is calculated
thexmi_y : 1D array
the set of cross mutual information values
len(thexmi_x): int
2 changes: 1 addition & 1 deletion rapidtide/experimental/harmonogram
@@ -239,7 +239,7 @@ for o, a in opts:
print('turning off legend label')
elif o == '--debug':
debug = True
-print('turning debuggin on')
+print('turning debugging on')
elif o == '--nowindow':
useHamming = False
print('turning window off')
4 changes: 2 additions & 2 deletions rapidtide/filter.py
@@ -1389,7 +1389,7 @@ def harmonicnotchfilter(timecourse, Fs, Ffundamental, notchpct=1.0, debug=False)
notchpct: float, optional
Width of the notch relative to the filter frequency in percent. Default is 1.0.
debug: bool, optional
-Set to True for additiona information on function internals. Default is False.
+Set to True for additional information on function internals. Default is False.
Returns
-------
@@ -1451,7 +1451,7 @@ def csdfilter(
If True, pad by wrapping the data in a cyclic manner rather than reflecting at the ends
debug: bool, optional
-Set to True for additiona information on function internals. Default is False.
+Set to True for additional information on function internals. Default is False.
Returns
-------
2 changes: 1 addition & 1 deletion rapidtide/happy_supportfuncs.py
@@ -438,7 +438,7 @@ def findbadpts(
thresh = numsigma * sigma
thebadpts = np.where(absdev >= thresh, 1.0, 0.0)
print(
-"Bad point threshhold set to",
+"Bad point threshold set to",
"{:.3f}".format(thresh),
"using the",
thetype,
2 changes: 1 addition & 1 deletion rapidtide/helper_classes.py
@@ -1340,7 +1340,7 @@ def track(self, x, fs):
print(self.times.shape, self.freqs.shape, thespectrogram.shape)
print(self.times)

-# intitialize the peak fitter
+# initialize the peak fitter
thefitter = SimilarityFunctionFitter(
corrtimeaxis=self.freqs,
lagmin=self.lowerlim,
1 change: 1 addition & 0 deletions rapidtide/refineregressor.py
@@ -627,6 +627,7 @@ def dorefine(

if cleanrefined:
thefit, R2 = tide_fit.mlregress(averagediscard, averagedata)

fitcoff = rt_floatset(thefit[0, 1])
datatoremove = rt_floatset(fitcoff * averagediscard)
outputdata -= datatoremove
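The `cleanrefined` branch in the hunk above fits the discarded-voxel average to the averaged data and subtracts the fitted component. A minimal numpy sketch of that idea, with synthetic stand-in data (`mlregress` and `rt_floatset` are rapidtide internals and are not reproduced here):

```python
import numpy as np

# Synthetic stand-ins for rapidtide's averagediscard/averagedata arrays.
rng = np.random.default_rng(0)
averagediscard = rng.normal(size=200)
averagedata = 0.5 * averagediscard + rng.normal(scale=0.1, size=200)

# Ordinary least squares with an intercept, standing in for mlregress;
# the slope plays the role of thefit[0, 1] in the code above.
A = np.column_stack([np.ones_like(averagediscard), averagediscard])
coeffs, *_ = np.linalg.lstsq(A, averagedata, rcond=None)
fitcoff = coeffs[1]

# Remove the fitted component, mirroring "outputdata -= datatoremove".
datatoremove = fitcoff * averagediscard
outputdata = averagedata - datatoremove
```

After the subtraction, `outputdata` is (up to a constant) uncorrelated with the discarded-voxel average, which is the point of the cleaning step.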
4 changes: 2 additions & 2 deletions rapidtide/scripts/fingerprint
@@ -103,8 +103,8 @@ def _get_parser():
"Atlas. Options are: "
"ASPECTS - ASPECTS territory atlas; "
"ATT - Arterial transit time flow territories; "
-"JHU1 - Johns Hopkins level 1 probabalistic arterial flow territories, without ventricles (default); "
-"JHU2 - Johns Hopkins level 2 probabalistic arterial flow territories, without ventricles."
+"JHU1 - Johns Hopkins level 1 probabilistic arterial flow territories, without ventricles (default); "
+"JHU2 - Johns Hopkins level 2 probabilistic arterial flow territories, without ventricles."
),
choices=["ASPECTS", "ATT", "JHU1", "JHU2"],
default="JHU1",
2 changes: 1 addition & 1 deletion rapidtide/scripts/happywarp
@@ -122,7 +122,7 @@ def _get_parser():

parser.add_argument(
"--debug",
-help="ouput additional debugging information",
+help="output additional debugging information",
action="store_true",
)

2 changes: 1 addition & 1 deletion rapidtide/stats.py
@@ -1036,7 +1036,7 @@ def makemask(image, threshpct=25.0, verbose=False, nozero=False, noneg=False):
print(
f"fracval: {pctthresh:.2f}",
f"threshpct: {threshpct:.2f}",
-f"mask threshhold: {threshval:.2f}",
+f"mask threshold: {threshval:.2f}",
)
themask = np.where(image > threshval, np.int16(1), np.int16(0))
return themask
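The message corrected in the `makemask` hunk above reports a threshold computed as a percentage of a robust maximum, which is then applied with `np.where`. A toy sketch of that fractional-threshold cut (using `np.percentile(image, 98)` as a stand-in for rapidtide's robust max, which is an assumption of this example, as are the image values):

```python
import numpy as np

# Toy "image": one bright voxel and several dim ones.
image = np.array([0.0, 1.0, 5.0, 10.0, 50.0])
threshpct = 25.0

# Threshold at threshpct percent of a robust maximum (98th percentile here).
robustmax = np.percentile(image, 98)
threshval = robustmax * threshpct / 100.0

# Same masking expression as the code above.
themask = np.where(image > threshval, np.int16(1), np.int16(0))
```

With these numbers only the brightest voxel exceeds the threshold, so `themask` comes out `[0, 0, 0, 0, 1]`.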
2 changes: 1 addition & 1 deletion rapidtide/workflows/atlastool.py
@@ -75,7 +75,7 @@ def _get_parser():
action="store",
type=lambda x: pf.is_float(parser, x),
metavar="FILE",
-help=("Threshhold for autogenerated mask (default is 0.25)."),
+help=("Threshold for autogenerated mask (default is 0.25)."),
default=0.25,
)
parser.add_argument(
4 changes: 2 additions & 2 deletions rapidtide/workflows/endtidalproc.py
@@ -90,7 +90,7 @@ def _get_parser():
dest="thresh",
metavar="PCT",
type=float,
-help="Amount of fall (or rise) needed, in percent, to recognize a peak (or trough).",
+help="Amount of fall (or rise) needed, in percent, to recognize a peak (or through).",
default=1.0,
)
parser.add_argument(
@@ -138,7 +138,7 @@ def endtidalproc():
args.thestarttime = xvec[thestartpoint]
args.theendtime = xvec[theendpoint]

-# set parameters - maxtime is the longest to look ahead for a peak (or trough) in seconds
+# set parameters - maxtime is the longest to look ahead for a peak (or through) in seconds
# lookahead should be '(samples / period) / f' where '4 >= f >= 1.25' might be a good value
maxtime = 1.0
f = 2.0
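The comment in the hunk above quotes a heuristic: lookahead should be `(samples / period) / f` with `4 >= f >= 1.25`. Reading `samples / period` as samples per respiratory period, a quick worked example (the 25 Hz sampling rate and 4 s period are illustrative assumptions, not rapidtide defaults; only `f = 2.0` matches the code above):

```python
# Worked example of the lookahead heuristic quoted in the comment above.
samplerate = 25.0                          # samples per second (assumed)
period = 4.0                               # seconds per breath (assumed)
f = 2.0                                    # same divisor the code above uses

samples_per_period = samplerate * period   # samples spanning one period
lookahead = int(samples_per_period / f)    # peak-detector lookahead, in samples
```

So at these assumed rates the detector would look 50 samples (2 s) ahead, i.e. half a respiratory period, before committing to a peak.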
8 changes: 4 additions & 4 deletions rapidtide/workflows/happy.py
@@ -331,7 +331,7 @@ def happy_main(argparsingfunc):
if args.fliparteries:
# add another pass to refine the waveform after getting the new appflips
numpasses += 1
-print("Adding a pass to regenerate cardiac waveform using bettter appflips")
+print("Adding a pass to regenerate cardiac waveform using better appflips")

# output mask size
print(f"estmask has {len(np.where(estmask_byslice[:, :] > 0)[0])} voxels above threshold.")
@@ -1073,7 +1073,7 @@ def happy_main(argparsingfunc):
infodict["respsamplerate"] = returnedinputfreq
infodict["numresppts_fullres"] = fullrespts

-# account for slice time offests
+# account for slice time offsets
offsets_byslice = np.zeros((xsize * ysize, numslices), dtype=np.float64)
for i in range(numslices):
offsets_byslice[:, i] = slicetimes[i]
@@ -1543,12 +1543,12 @@ def happy_main(argparsingfunc):
debug=args.debug,
)

-# find vessel threshholds
+# find vessel thresholds
tide_util.logmem("before making vessel masks")
hardvesselthresh = tide_stats.getfracvals(np.max(histinput, axis=1), [0.98])[0] / 2.0
softvesselthresh = args.softvesselfrac * hardvesselthresh
print(
-"hard, soft vessel threshholds set to",
+"hard, soft vessel thresholds set to",
"{:.3f}".format(hardvesselthresh),
"{:.3f}".format(softvesselthresh),
)
2 changes: 1 addition & 1 deletion rapidtide/workflows/happy_parser.py
@@ -677,7 +677,7 @@ def process_args(inputargs=None):
args = _get_parser().parse_args(inputargs)
argstowrite = inputargs
except SystemExit:
-print("Use --help option for detailed informtion on options.")
+print("Use --help option for detailed information on options.")
raise

# save the raw and formatted command lines
2 changes: 1 addition & 1 deletion rapidtide/workflows/pixelcomp.py
@@ -57,7 +57,7 @@ def _get_parser():
parser.add_argument(
"--scatter",
action="store_true",
-help=("Do a scatter plot intstead of a contour plot."),
+help=("Do a scatter plot instead of a contour plot."),
default=False,
)
parser.add_argument(
4 changes: 2 additions & 2 deletions rapidtide/workflows/rapidtide.py
@@ -1769,7 +1769,7 @@ def rapidtide_main(argparsingfunc):
)
LGR.verbose(f"edgebufferfrac set to {optiondict['edgebufferfrac']}")

-# intitialize the correlation fitter
+# initialize the correlation fitter
thefitter = tide_classes.SimilarityFunctionFitter(
lagmod=optiondict["lagmod"],
lthreshval=optiondict["lthreshval"],
@@ -2227,7 +2227,7 @@ def rapidtide_main(argparsingfunc):
if optiondict["ampthreshfromsig"]:
if pcts is not None:
LGR.info(
-f"setting ampthresh to the p < {1.0 - thepercentiles[0]:.3f} threshhold"
+f"setting ampthresh to the p < {1.0 - thepercentiles[0]:.3f} threshold"
)
optiondict["ampthresh"] = pcts[0]
tide_stats.printthresholds(
8 changes: 4 additions & 4 deletions rapidtide/workflows/rapidtide_parser.py
@@ -1080,7 +1080,7 @@ def _get_parser():
metavar="THRESH",
type=float,
help=(
-"Threshhold value (fraction of maximum) in a histogram "
+"Threshold value (fraction of maximum) in a histogram "
f"to be considered the start of a peak. Default is {DEFAULT_PICKLEFT_THRESH}."
),
default=DEFAULT_PICKLEFT_THRESH,
@@ -1160,7 +1160,7 @@ def _get_parser():
type=int,
metavar="MAXPASSES",
help=(
-"Terminate refinement after MAXPASSES passes, whether or not convergence has occured. "
+"Terminate refinement after MAXPASSES passes, whether or not convergence has occurred. "
f"Default is {DEFAULT_MAXPASSES}."
),
default=DEFAULT_MAXPASSES,
@@ -1656,7 +1656,7 @@ def _get_parser():
"--focaldebug",
dest="focaldebug",
action="store_true",
-help=("Enable targetted additional debugging output (used during development)."),
+help=("Enable targeted additional debugging output (used during development)."),
default=False,
)
debugging.add_argument(
@@ -1798,7 +1798,7 @@ def process_args(inputargs=None):
# what fraction of the correlation window to avoid on either end when
# fitting
args["edgebufferfrac"] = 0.0
-# only do fits in voxels that exceed threshhold
+# only do fits in voxels that exceed threshold
args["enforcethresh"] = True
# if set to the location of the first autocorrelation sidelobe,
# this will fold back sidelobes
2 changes: 1 addition & 1 deletion rapidtide/workflows/showarbcorr.py
@@ -377,7 +377,7 @@ def showarbcorr(args):

thepxcorr = pearsonr(filtereddata1, filtereddata2)

-# intitialize the correlation fitter
+# initialize the correlation fitter
thexsimfuncfitter = tide_classes.SimilarityFunctionFitter(
corrtimeaxis=xcorr_x,
lagmin=args.lagmin,
4 changes: 2 additions & 2 deletions rapidtide/workflows/showxcorrx.py
@@ -558,7 +558,7 @@ def showxcorrx(args):
)

if args.similaritymetric == "mutualinfo":
-# intitialize the similarity function fitter
+# initialize the similarity function fitter
themifitter = tide_classes.SimilarityFunctionFitter(
corrtimeaxis=MI_x_trim,
lagmin=args.lagmin,
@@ -573,7 +573,7 @@
)
maxdelaymi = MI_x_trim[np.argmax(theMI_trim)]
else:
-# intitialize the correlation fitter
+# initialize the correlation fitter
thexsimfuncfitter = tide_classes.SimilarityFunctionFitter(
corrtimeaxis=xcorr_x,
lagmin=args.lagmin,
1 change: 0 additions & 1 deletion setup.py
@@ -49,7 +49,6 @@
"rapidtide/multiproc",
"rapidtide/patchmatch",
"rapidtide/peakeval",
-"rapidtide/refine",
"rapidtide/refinedelay",
"rapidtide/refineregressor",
"rapidtide/resample",