diff --git a/.github/workflows/codespell.yml b/.github/workflows/codespell.yml new file mode 100644 index 00000000..b2316674 --- /dev/null +++ b/.github/workflows/codespell.yml @@ -0,0 +1,25 @@ +# Codespell configuration is within pyproject.toml +--- +name: Codespell + +on: + push: + branches: [main] + pull_request: + branches: [main] + +permissions: + contents: read + +jobs: + codespell: + name: Check for spelling errors + runs-on: ubuntu-latest + + steps: + - name: Checkout + uses: actions/checkout@v4 + - name: Annotate locations with typos + uses: codespell-project/codespell-problem-matcher@v1 + - name: Codespell + uses: codespell-project/actions-codespell@v2 diff --git a/CHANGELOG.md b/CHANGELOG.md index 80b59aa0..7a15e53a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -95,7 +95,7 @@ * (rapidtide) Now specifying brainmask and/or graymattermask will automatically do smart processing enhancements. You can specify whitemattermask, but that currently does nothing except pass the map along. * (rapidtide) pickleft is now the default (you never don't want to do it). The argument to specify it does nothing, but doesn't throw an error. You can turn it off with nopickleft (but, seriously, don't). * (package) In addition to the APARC label macros for specifying atlas values, I've added SSEG_GRAY and SSEG_WHITE to support gray and white mask generation from SynthSeg output. -* (package) Fixed a rarely encounted but serious bug when reading tsv files with missing values - these are now converted to zeros on read rather than propagating NaN's through processing. +* (package) Fixed a rarely encountered but serious bug when reading tsv files with missing values - these are now converted to zeros on read rather than propagating NaN's through processing. ## Version 2.9.4.1 (8/7/24) * (package) Deployment fix @@ -126,9 +126,9 @@ * (rapidtide-cloud) Made nda-download-data take a dataset name. ## Version 2.9.1 (6/12/24) -* (rapidtide) Made some tweaks to confound regresssion. 
+* (rapidtide) Made some tweaks to confound regression. * (rapidtide) Fixed a logging error when doing noise regressor cleaning. -* (rapidtide2std) Also regeister confoundfilter R2 map. +* (rapidtide2std) Also register confoundfilter R2 map. * (io) Added the ability to read in FSL design.mat file as multicolumn text files. * (qualitycheck) Added some new metrics. @@ -138,7 +138,7 @@ * (rapidtide) Fixed globallaghist output. * (rapidtide2std) Updated for new file names, added option to transform GLM filtered data. * (fixtr) New utility to change the TR in a nifti header. -* (reference) Made a new lut for JHU LVL1 NoVent to try to highlight heirarchy. +* (reference) Made a new lut for JHU LVL1 NoVent to try to highlight hierarchy. * (package) Found and fixed a lot of weird, rare bugs (mostly typos). ## Version 2.8.9.2 (5/22/24) @@ -170,7 +170,7 @@ * (rapidtide) Added new option to start from a random probe regressor. * (rapidtide) Moved performance options into their own section. * (rapidtide) Cleaned up code that reads (or rereads) data prior to GLM. -* (rapidtide) You can specify MKL threads AND multiprocessing - multithreading is disabled and reenabled automatically as needed. +* (rapidtide) You can specify MKL threads AND multiprocessing - multithreading is disabled and re-enabled automatically as needed. * (rapidtide) Do refinement on padded data, to infer data past the ends of the imaging window. * (rapidtide) Save the padded, lagged timecourse generator. * (rapidtide) Scale voxel timecourses prior to PCA refinement. @@ -199,7 +199,7 @@ ## Version 2.8.6 (4/5/24) * (rapidtide) Tweaked the behavior of the ``--CVR`` flag. * (rapidtide) Made major improvements to motion regression. -* (rapidtide) Consolodated all glm filtering into a single multiprocessing routine, added some new capabilities to rapidtide GLM filtering. +* (rapidtide) Consolidated all glm filtering into a single multiprocessing routine, added some new capabilities to rapidtide GLM filtering. 
* (resampletc) Renamed from resamp1tc to make the program names more consistent. * (package) Made pyfftw an optional dependency, since it seems to be the only thing blocking python 3.12 compatibility. * (package) Added some new tests. @@ -225,7 +225,7 @@ * (package) Accepted several dependabot changes. ## Version 2.8.3 (3/7/24) -* (rapidtide) Fixed the logic for saving lagregressors - they only exist if you do GLM or CVR analysis, so if you set nolimitoutput, check for existance first (thanks to Laura Murray for finding this bug). +* (rapidtide) Fixed the logic for saving lagregressors - they only exist if you do GLM or CVR analysis, so if you set nolimitoutput, check for existence first (thanks to Laura Murray for finding this bug). * (rapidtide) Changed the name of the file containing the voxel specific EVs that are regressed out by the GLM from "lagregressors_bold" to "lfofilterEVs_bold" (thanks to Tianye Zhai for flagging this). * (localflow) Added a new program to test a hunch. * (fit) Gracefully handle singular matrices in mlregress. @@ -423,7 +423,7 @@ * (rapidtide) Fixed a bug in formatting run timings. * (filttc) Now allow normalization before or after filtering. * (showxcorrx) Made fit width limits configurable. -* (calcicc) Moved main calculations into niftistats, made two shells to calculate either icc or ttests. +* (calcicc) Moved main calculations into niftistats, made two shells to calculate either icc or t-tests. * (package) Disabled numba because of multiple bugs and incompatibility with py3.11 and ARM. * (package) Made some updates to rapidtide cloud routines to make things a bit more stable. @@ -446,7 +446,7 @@ * (docs) Removed duplicate funding source. Hopefully this will resolve the Pypi upload issue. ## Version 2.4.5 (4/10/23) -* (docs) Addded some new sections to theory. +* (docs) Added some new sections to theory. * (package) Completely changed the way I handle and distribute test data.
This makes the package much smaller (~17M), which should fix pypi deployment. This involved several changes to the Docker and circleCI workflows, which I think are now stable. ## Version 2.4.4 (3/30/23) @@ -479,7 +479,7 @@ * (Docker) Build python environment with pip rather than conda now. ## Version 2.4.0 (10/6/22) -* (rapidtide) Added enhanced variance removal assesment. +* (rapidtide) Added enhanced variance removal assessment. * (rapidtide) Fixed a rare crashing bug in proctiminglogfile. * (rapidtide) Output some files indicating run status. * (package) Fixed a deprecation warning in pearsonr. @@ -488,7 +488,7 @@ ## Version 2.3.1 (9/27/22) * (Dockerfile) Some tweaks to package versions to try to eliminate error messages. * (Dockerfile) Add some AWS libraries to facilitate using S3 volumes. -* (Dockerfile) Moved timezone data loading earlier in the file to accomodate the new libraries. +* (Dockerfile) Moved timezone data loading earlier in the file to accommodate the new libraries. * (reference) Added HCP_negmask_2mm to improve map display. * (github) Updated codeql actions to v2. @@ -527,7 +527,7 @@ ## Version 2.2.5 (4/26/22) * (rapidtide) Postprocess timing information to make it more useful. -* (rapidtide) Reenabled numba by default. +* (rapidtide) Re-enabled numba by default. * (fingerprint) Fixed handling of 4D atlases, empty regions, and 4D masks. Added "constant" template, and allow 0th order processing (mean). * (atlastood) Fixed 4D atlas handling. Now mask atlas after collapsing to 3D. * (histnifti) Added ``--transform`` flag to map values to percentiles. @@ -569,7 +569,7 @@ * (showtc) Added support for files with large star time offsets. * (showxy) Some appearance tweaks. * (niftidecomp) Improved mask generation. -* (variabilityizer) New progam to transform fMRI datasets to variability measures. +* (variabilityizer) New program to transform fMRI datasets to variability measures. 
## Version 2.1.0 (9/21/21) * (spatialmi) Added new program to calculate local mutual information between 3D images. @@ -629,7 +629,7 @@ ## Version 2.0 (6/2/21) Much thanks to Taylor Salo for his continuing contributions, with several substantive improvements to code, documentation, and automatic testing, and generally helping devise a sensible release roadmap that made this version possible. -This release is a big one - there are many new programs, new capabilities in existing programs, and workflow breaking syntax changes. However, this was all with the purpose of making a beter package with much more consistent interfaces that allow you to figure out pretty quickly how to get the programs to do exactly what you want. +This release is a big one - there are many new programs, new capabilities in existing programs, and workflow breaking syntax changes. However, this was all with the purpose of making a better package with much more consistent interfaces that allow you to figure out pretty quickly how to get the programs to do exactly what you want. The biggest change is to rapidtide itself. For several years, there have been two versions of rapidtide; rapidtide2 (the traditional version), and rapidtide2x (the experimental version for testing new features). When features became stable, I migrated them back to rapidtide2, more and more quickly as time went by, so they became pretty much the same. I took the 2.0 release as an opportunity to do some cleanup. As of now, there is only one version of rapidtide, with two parsers. If you call "rapidtide", you get the spiffy new option parser and much more rational and consistent option naming and specification. This is a substantial, but simple, change. For compatibility with old workflows, I preserved the old parser, which is called "rapidtide2x_legacy". This accepts options just as rapidtide2 and rapidtide2x did in version 1.9.6. 
@@ -668,7 +668,7 @@ Also - all outputs now conform to BIDS naming conventions to improve compatibili * (rapidtide): Any option that takes additional values (numbers, file names, etc.) is now specified as '--option VALUE [VALUE [VALUE...]]' rather than as '--option=VALUE[,VALUE[,VALUE...]]'. * (rapidtide): After a lot of use over the years, I've reset a lot of defaults to reflect typical usage. You can still do any analysis you were doing before, but it may now require changes to scripts and workflows to get the old default behavior. For most cases you can get good analyses with a minimum set of command line options now. * (rapidtide): There are two new macros, --denoise and --delaymapping, which will set defaults to good values for those use cases in subjects without vascular pathology. Any of the preset values for these macros can be overridden with command line options. -* (rapidtide, rapidtide2x_legacy): Regressor and data filtering has been changed significantly. While the nominal filter passbands are the same, the transitions to the stopbands have been tightened up quite a bit. This is most noticable in the LFO band. The pasband is still from 0.01-0.15Hz with a trapezoidal rolloff, but the upper stopband now starts at 0.1575Hz instead of 0.20Hz. The wide transition band was letting in a significant amount of respiratory signal for subjects with low respiratory rates (about half of my subjects seem to breath slower than the nominal adult minimum rate of 12 breaths/minute). +* (rapidtide, rapidtide2x_legacy): Regressor and data filtering has been changed significantly. While the nominal filter passbands are the same, the transitions to the stopbands have been tightened up quite a bit. This is most noticeable in the LFO band. The passband is still from 0.01-0.15Hz with a trapezoidal rolloff, but the upper stopband now starts at 0.1575Hz instead of 0.20Hz.
The wide transition band was letting in a significant amount of respiratory signal for subjects with low respiratory rates (about half of my subjects seem to breathe slower than the nominal adult minimum rate of 12 breaths/minute). * (rapidtide): The -V, -L, -R and -C filter band specifiers have been retired. Filter bands are now specified with '--filterband XXX', where XXX is vlf, lfo, lfo_legacy, resp, cardiac, or None. 'lfo' is selected by default (LFO band with sharp transition bands). To skip filtering, use '--filterband None'. '--filterband lfo_legacy' will filter to the LFO band with the old, wide transition bands. * (rapidtide): To specify an arbitrary filter, specify the pass freqs with --filterfreqs, and then optionally the stop freqs with --filterstopfreqs (otherwise the stop freqs will be calculated automatically from the pass freqs). * (rapidtide): The method for specifying the lag search range has changed. '-r LAGMIN,LAGMAX' has been removed. You now use '--searchrange LAGMIN LAGMAX' @@ -688,10 +688,10 @@ Happy also got a new parser and BIDS outputs. You can call happy with the old i * (happy) Added support for scans where there is circulating contrast. General Changes to the entire package: -* (package) Python 2.7 support is now officially ended. Cleaned out compatiblity code. +* (package) Python 2.7 support is now officially ended. Cleaned out compatibility code. * (package) Dropped support for python 3.3-3.5 and added 3.9. * (package) Made pyfftw and numba requirements. -* (package) Significantly increased test coverage by including smoke tests (exercise as many code paths as possible to find crashers in neglected code - this is how the above bugs were found). +* (package) Significantly increased test coverage by including smoke tests (exercise as many code paths as possible to find crashes in neglected code - this is how the above bugs were found). * (package) Automated consistent formatting. black now runs automatically on file updates.
* (package) General cleanup and rationalization of imports. isort now runs automatically on file updates. * (package) Fixed a stupid bug that surfaced when reading in all columns of a text file as input. @@ -796,7 +796,7 @@ Miscellaneous changes: * (rapidtide, io) Significant improvement to CIFTI handling - now properly read and write parcellated scalars and time series. * (io) Vastly improved reading in arbitrarily large text files. * (stats) Fixed a bug in getfracvals when you try to find the maximum value. -* (package) Began aggressive implementation of smoke tests (excercise as many code paths as possible to find crashers in neglected code - this is how the above bugs were found). +* (package) Began aggressive implementation of smoke tests (exercise as many code paths as possible to find crashes in neglected code - this is how the above bugs were found). * (package) More logging refinement. ## Version 2.0alpha24 (4/14/21) @@ -824,7 +824,7 @@ Miscellaneous changes: * (rapidtide) Corrected BIDS naming of intermediate maps. ## Version 2.0alpha20 (3/28/21) -* (package) Python 2.7 support is now officially ended. Cleaned out compatiblity code. +* (package) Python 2.7 support is now officially ended. Cleaned out compatibility code. * (package) Made pyfftw and numba requirements. * (docs) Wrote general description of text input functions, enhanced description of happy, include examples. * (style) Began effort with T. Salo to address linter errors and generally improve PEP8 conformance - remove dead code, rationalize imports, improve docstrings, convert class names to CamelCase, use snake_case for functions. @@ -962,7 +962,7 @@ Much thanks to Taylor Salo for his continuing contributions, with several substa * (rapidtide): Any option that takes additional values (numbers, file names, etc.) is now specified as '--option VALUE [VALUE [VALUE...]]' rather than as '--option=VALUE[,VALUE[,VALUE...]]'. 
* (rapidtide): After a lot of use over the years, I've reset a lot of defaults to reflect typical usage. You can still do any analysis you were doing before, but it may now require changes to scripts and workflows to get the old default behavior. For most cases you can get good analyses with a minimum set of command line options now. * (rapidtide): There are two new macros, --denoise and --delaymapping, which will set defaults to good values for those use cases in subjects without vascular pathology. Any of the preset values for these macros can be overridden with command line options. -* (rapidtide, rapidtide2x): Regressor and data filtering has been changed significantly. While the nominal filter passbands are the same, the transitions to the stopbands have been tightened up quite a bit. This is most noticable in the LFO band. The pasband is still from 0.01-0.15Hz with a trapezoidal rolloff, but the upper stopband now starts at 0.1575Hz instead of 0.20Hz. The wide transition band was letting in a significant amount of respiratory signal for subjects with low respiratory rates (about half of my subjects seem to breath slower than the nominal adult minimum rate of 12 breaths/minute). +* (rapidtide, rapidtide2x): Regressor and data filtering has been changed significantly. While the nominal filter passbands are the same, the transitions to the stopbands have been tightened up quite a bit. This is most noticeable in the LFO band. The passband is still from 0.01-0.15Hz with a trapezoidal rolloff, but the upper stopband now starts at 0.1575Hz instead of 0.20Hz. The wide transition band was letting in a significant amount of respiratory signal for subjects with low respiratory rates (about half of my subjects seem to breathe slower than the nominal adult minimum rate of 12 breaths/minute). * (rapidtide): The -V, -L, -R and -C filter band specifiers have been retired.
Filter bands are now specified with '--filterband XXX', where XXX is vlf, lfo, lfo_legacy, resp, cardiac, or None. 'lfo' is selected by default (LFO band with sharp transition bands). To skip filtering, use '--filterband None'. '--filterband lfo_legacy' will filter to the LFO band with the old, wide transition bands. * (rapidtide): To specify an arbitrary filter, use '--filterfreqs LOWERPASS UPPERPASS [LOWERSTOP UPPERSTOP]'. If you don't specify the stop bands, the stop frequencies are set to 5% below and above LOWERPASS and UPPERPASS, respectively. * (rapidtide): The method for specifying the lag search range has changed. '-r LAGMIN,LAGMAX' has been removed. You now use '--searchrange LAGMIN LAGMAX' @@ -1006,8 +1006,8 @@ Much thanks to Taylor Salo for his continuing contributions, with several substa * (rapidtide2x, showxcorrx): Revised internals to guarantee xcorr scale matches values * (rapidtide2x, showxcorrx): Improved fitter performance in edge cases (thin peaks, symmetric around max) * (rapidtide2x, showxcorrx): Changed limits to avoid crash when peak is at edge of range -* (rapidtide2x, showxcorrx): Fixed some (but apparantly not all) dumb errors in calls to null correlation calculations. -* (rapidtide2x): Implemented workaround for unknown crasher in GLM filtering when nprocs != 1 +* (rapidtide2x, showxcorrx): Fixed some (but apparently not all) dumb errors in calls to null correlation calculations. +* (rapidtide2x): Implemented workaround for unknown crash in GLM filtering when nprocs != 1 * (rapidtide2x): Added experimental respiration processing * (rapidtide2x): Fixed an uncaught bug in bipolar processing. 
* (rapidtide2x): Setting ampthresh to a negative number between 0 and 1 sets the percentile of voxels to use for refinement @@ -1049,7 +1049,7 @@ Much thanks to Taylor Salo for his continuing contributions, with several substa * (tidepool): Removed support for pyqt4 * (tidepool): Some UI tweaks * (tidepool): Added some infrastructure for future support for loading multiple runs -* (tidepool): New atlases to suport fmriprep default coordinates +* (tidepool): New atlases to support fmriprep default coordinates * (tidepool): Numerous bug fixes * (ccorrica): Added the ability to oversample the data prior to crosscorrelation * (showtc): Added ability to select a column from a multicolumn file as input. @@ -1172,7 +1172,7 @@ Much thanks to Taylor Salo for his continuing contributions, with several substa * (workflows) Initial creation (work in progress) (tsalo). * (testing) Reorganized and fixed - now it actually works! (tsalo). * (coverage) Code coverage for testing is now tracked (21% - we can improve significantly with workflows) (tsalo). -* (rapidtide2, 2x, happy) Finally found (and fixed) the reason for a range of random stalls and slowdowns when running on a cluster. MKL extensions were silently distributing some numpy calculations over all cores (which means running N jobs running on a cluster tried to use N^2 cores - not good at all...). The maxiumum number of MKL threads is now settable on the command line, and defaults to 1 (no multiprocessor numpy). Strangely, this makes everything a little faster in single processor mode, and A LOT faster in multiprocessor mode. +* (rapidtide2, 2x, happy) Finally found (and fixed) the reason for a range of random stalls and slowdowns when running on a cluster. MKL extensions were silently distributing some numpy calculations over all cores (which means running N jobs running on a cluster tried to use N^2 cores - not good at all...). 
The maximum number of MKL threads is now settable on the command line, and defaults to 1 (no multiprocessor numpy). Strangely, this makes everything a little faster in single processor mode, and A LOT faster in multiprocessor mode. * (tide_funcs.py) tide_funcs.py has been split into filter.py, fit.py, io.py, miscmath.py, resample.py, stats.py, and util.py. All executables fixed to match. * (rapidtide2, 2x) Oversample factor is now set automatically by default to make the correlation timestep 0.5 or less. This dramatically improves fits for longer TRs (> 1.5 seconds). * (rapidtide2, 2x) Moved the major passes (null correlation, correlation, correlation fit, refine, wiener filter and glm) into separate modules for maintainability and to simplify tinkering. diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md index 830f2b67..5b323fdd 100644 --- a/CODE_OF_CONDUCT.md +++ b/CODE_OF_CONDUCT.md @@ -5,7 +5,7 @@ participation in our project and our community to be a harassment-free experience for everyone. Although no list can hope to be all-encompassing, we explicitly honor diversity in age, -body size, disability, ethnicity, gender identity and expression, level of experience, native language, education, socio-economic status, nationality, personal appearance, race, religion, +body size, disability, ethnicity, gender identity and expression, level of experience, native language, education, socioeconomic status, nationality, personal appearance, race, religion, or sexual identity and orientation. ## Our Standards diff --git a/docs/legacy.rst b/docs/legacy.rst index e1a74c48..99183c0b 100644 --- a/docs/legacy.rst +++ b/docs/legacy.rst @@ -56,7 +56,7 @@ For compatibility with old workflows, rapidtide can be called using legacy synta --permutationmethod=METHOD - Method for permuting the regressor for significance estimation. 
Default is shuffle --skipsighistfit - Do not fit significance histogram with a Johnson SB function - --windowfunc=FUNC - Use FUNC window funcion prior to correlation. Options are + --windowfunc=FUNC - Use FUNC window function prior to correlation. Options are hamming (default), hann, blackmanharris, and None --nowindow - Disable precorrelation windowing -f GAUSSSIGMA - Spatially filter fMRI data prior to analysis using @@ -112,7 +112,7 @@ For compatibility with old workflows, rapidtide can be called using legacy synta --liang - Use generalized cross-correlation with Liang weighting function (Liang, et al, doi:10.1109/IMCCC.2015.283) --eckart - Use generalized cross-correlation with Eckart weighting function - --corrmaskthresh=PCT - Do correlations in voxels where the mean exceeeds this + --corrmaskthresh=PCT - Do correlations in voxels where the mean exceeds this percentage of the robust max (default is 1.0) --corrmask=MASK - Only do correlations in voxels in MASK (if set, corrmaskthresh is ignored). @@ -224,7 +224,7 @@ For compatibility with old workflows, rapidtide can be called using legacy synta --acfix - Perform a secondary correlation to disambiguate peak location (enables --accheck). Experimental. --tmask=MASKFILE - Only correlate during epochs specified in - MASKFILE (NB: if file has one colum, the length needs to match + MASKFILE (NB: if file has one column, the length needs to match the number of TRs used. TRs with nonzero values will be used in analysis. If there are 2 or more columns, each line of MASKFILE contains the time (first column) and duration (second column) of an diff --git a/docs/theoryofoperation.rst b/docs/theoryofoperation.rst index 77f72424..27201883 100644 --- a/docs/theoryofoperation.rst +++ b/docs/theoryofoperation.rst @@ -480,7 +480,7 @@ Time delay determination This is the core of the program, that actually does the delay determination. 
It's currently divided into two parts - -calculation of a time dependant similarity function between the sLFO regressor and each voxel +calculation of a time dependent similarity function between the sLFO regressor and each voxel (currently using one of three methods), and then a fitting step to find the peak time delay and strength of association between the two signals. diff --git a/docs/usage_general.rst b/docs/usage_general.rst index b130b98a..3de41f53 100644 --- a/docs/usage_general.rst +++ b/docs/usage_general.rst @@ -107,7 +107,7 @@ waveform, you'd run: happytest_desc-slicerescardfromfmri_timeseries.json:cardiacfromfmri,cardiacfromfmri_dlfiltered \ --format separate -There are some companion programs - ``showxy`` works on 2D (x, y) data; ``showhist`` is specificaly for viewing +There are some companion programs - ``showxy`` works on 2D (x, y) data; ``showhist`` is specifically for viewing histograms (``_hist`` files) generated by several programs in the package, ``spectrogram`` generates and displays spectrograms of time series data. Each of these is separately documented below. diff --git a/docs/usage_rapidtide.rst b/docs/usage_rapidtide.rst index 77301956..e24cd9a9 100644 --- a/docs/usage_rapidtide.rst +++ b/docs/usage_rapidtide.rst @@ -505,7 +505,7 @@ For this type of analysis, a good place to start is the following: The first option (``--numnull 0``), shuts off the calculation of the null correlation distribution. This is used to determine the significance threshold, but the method currently implemented in rapidtide is a bit simplistic - it -assumes that all the time points in the data are exchangable. This is certainly true for resting state data (see +assumes that all the time points in the data are exchangeable. This is certainly true for resting state data (see above), but it is very much NOT true for block paradigm gas challenges. 
To properly analyze those, I need to consider what time points are 'equivalent', and up to now, I don't, so setting the number of iterations in the Monte Carlo analysis to zero omits this step. @@ -580,7 +580,7 @@ and more importantly, we have access to some much higher quality NIRS data, this The majority of the work was already done, I just needed to account for a few qualities that make NIRS data different from fMRI data: * NIRS data is not generally stored in NIFTI files. While there is one now (SNIRF), at the time I started doing this, there was no standard NIRS file format. In the absence of one, you could do worse than a multicolumn text file, with one column per data channel. That's what I did here - if the file has a '.txt' extension rather than '.nii.', '.nii.gz', or no extension, it will assume all I/O should be done on multicolumn text files. However, I'm a firm believer in SNIRF, and will add support for it one of these days. -* NIRS data is often zero mean. This turned out to mess with a lot of my assumptions about which voxels have significant data, and mask construction. This has led to some new options for specifying mask threshholds and data averaging. +* NIRS data is often zero mean. This turned out to mess with a lot of my assumptions about which voxels have significant data, and mask construction. This has led to some new options for specifying mask thresholds and data averaging. * NIRS data is in some sense "calibrated" as relative micromolar changes in oxy-, deoxy-, and total hemoglobin concentration, so mean and/or variance normalizing the timecourses may not be right thing to do. I've added in some new options to mess with normalizations. 
diff --git a/docs/usage_tidepool.rst b/docs/usage_tidepool.rst index 34f2ee5e..37865e62 100644 --- a/docs/usage_tidepool.rst +++ b/docs/usage_tidepool.rst @@ -3,7 +3,7 @@ tidepool Description: ^^^^^^^^^^^^ - Tidepool is a handy tool for displaying all of the various maps generated by rapidtide in one place, overlayed on an anatomic image. This makes it easier to see how all the maps are related to one another. To use it, launch tidepool from the command line, navigate to a rapidtide output directory, and then select a lag time (maxcorr) map. tidpool will figure out the root name and pull in all of the other associated maps, timecourses, and info files. The displays are live, and linked together, so you can explore multiple parameters efficiently. Works in native or standard space. + Tidepool is a handy tool for displaying all of the various maps generated by rapidtide in one place, overlaid on an anatomic image. This makes it easier to see how all the maps are related to one another. To use it, launch tidepool from the command line, navigate to a rapidtide output directory, and then select a lag time (maxcorr) map. tidepool will figure out the root name and pull in all of the other associated maps, timecourses, and info files. The displays are live, and linked together, so you can explore multiple parameters efficiently. Works in native or standard space. ..
image:: images/tidepool_overview.jpg :align: center diff --git a/pyproject.toml b/pyproject.toml index 6de03ccc..f7bbadbd 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -136,3 +136,10 @@ versionfile_source = 'rapidtide/_version.py' versionfile_build = 'rapidtide/_version.py' tag_prefix = 'v' parentdir_prefix = 'rapidtide-' + +[tool.codespell] +# Ref: https://github.com/codespell-project/codespell#using-a-config-file +skip = '.git*,versioneer.py,*.css,exportlist.txt,data,*.bib' +check-hidden = true +ignore-regex = '\bsubjeT\b' +ignore-words-list = 'thex,normall' diff --git a/rapidtide/OrthoImageItem.py b/rapidtide/OrthoImageItem.py index 73730f55..f145b9d1 100644 --- a/rapidtide/OrthoImageItem.py +++ b/rapidtide/OrthoImageItem.py @@ -148,7 +148,7 @@ def __init__( self.offsetz = self.imgsize * (0.5 - self.zfov / (2.0 * self.maxfov)) if self.verbose > 1: - print("OrthoImageItem intialization:") + print("OrthoImageItem initialization:") print(" Dimensions:", self.xdim, self.ydim, self.zdim) print(" Voxel sizes:", self.xsize, self.ysize, self.zsize) print(" FOVs:", self.xfov, self.yfov, self.zfov) diff --git a/rapidtide/correlate.py b/rapidtide/correlate.py index 90a2ee29..cd03dce2 100644 --- a/rapidtide/correlate.py +++ b/rapidtide/correlate.py @@ -489,7 +489,7 @@ def cross_mutual_info( ------- if returnaxis is True: thexmi_x : 1D array - the set of offsets at which cross mutual information is calcuated + the set of offsets at which cross mutual information is calculated thexmi_y : 1D array the set of cross mutual information values len(thexmi_x): int diff --git a/rapidtide/experimental/harmonogram b/rapidtide/experimental/harmonogram index 4aca7cc0..34b4e4a4 100755 --- a/rapidtide/experimental/harmonogram +++ b/rapidtide/experimental/harmonogram @@ -239,7 +239,7 @@ for o, a in opts: print('turning off legend label') elif o == '--debug': debug = True - print('turning debuggin on') + print('turning debugging on') elif o == '--nowindow': useHamming = False 
print('turning window off') diff --git a/rapidtide/filter.py b/rapidtide/filter.py index 9f1ec4b2..4e7c5e4b 100644 --- a/rapidtide/filter.py +++ b/rapidtide/filter.py @@ -1389,7 +1389,7 @@ def harmonicnotchfilter(timecourse, Fs, Ffundamental, notchpct=1.0, debug=False) notchpct: float, optional Width of the notch relative to the filter frequency in percent. Default is 1.0. debug: bool, optional - Set to True for additiona information on function internals. Default is False. + Set to True for additional information on function internals. Default is False. Returns ------- @@ -1451,7 +1451,7 @@ def csdfilter( If True, pad by wrapping the data in a cyclic manner rather than reflecting at the ends debug: bool, optional - Set to True for additiona information on function internals. Default is False. + Set to True for additional information on function internals. Default is False. Returns ------- diff --git a/rapidtide/happy_supportfuncs.py b/rapidtide/happy_supportfuncs.py index e7c777e8..c0fee1e3 100644 --- a/rapidtide/happy_supportfuncs.py +++ b/rapidtide/happy_supportfuncs.py @@ -438,7 +438,7 @@ def findbadpts( thresh = numsigma * sigma thebadpts = np.where(absdev >= thresh, 1.0, 0.0) print( - "Bad point threshhold set to", + "Bad point threshold set to", "{:.3f}".format(thresh), "using the", thetype, diff --git a/rapidtide/helper_classes.py b/rapidtide/helper_classes.py index bb54b84e..b7a4d015 100644 --- a/rapidtide/helper_classes.py +++ b/rapidtide/helper_classes.py @@ -1340,7 +1340,7 @@ def track(self, x, fs): print(self.times.shape, self.freqs.shape, thespectrogram.shape) print(self.times) - # intitialize the peak fitter + # initialize the peak fitter thefitter = SimilarityFunctionFitter( corrtimeaxis=self.freqs, lagmin=self.lowerlim, diff --git a/rapidtide/refineregressor.py b/rapidtide/refineregressor.py index 3f1a2644..b3f4c27f 100644 --- a/rapidtide/refineregressor.py +++ b/rapidtide/refineregressor.py @@ -627,6 +627,7 @@ def dorefine( if cleanrefined: 
thefit, R2 = tide_fit.mlregress(averagediscard, averagedata) + fitcoff = rt_floatset(thefit[0, 1]) datatoremove = rt_floatset(fitcoff * averagediscard) outputdata -= datatoremove diff --git a/rapidtide/scripts/fingerprint b/rapidtide/scripts/fingerprint index d59b4c73..a596061a 100755 --- a/rapidtide/scripts/fingerprint +++ b/rapidtide/scripts/fingerprint @@ -103,8 +103,8 @@ def _get_parser(): "Atlas. Options are: " "ASPECTS - ASPECTS territory atlas; " "ATT - Arterial transit time flow territories; " - "JHU1 - Johns Hopkins level 1 probabalistic arterial flow territories, without ventricles (default); " - "JHU2 - Johns Hopkins level 2 probabalistic arterial flow territories, without ventricles." + "JHU1 - Johns Hopkins level 1 probabilistic arterial flow territories, without ventricles (default); " + "JHU2 - Johns Hopkins level 2 probabilistic arterial flow territories, without ventricles." ), choices=["ASPECTS", "ATT", "JHU1", "JHU2"], default="JHU1", diff --git a/rapidtide/scripts/happywarp b/rapidtide/scripts/happywarp index 5253dad3..ce60c438 100755 --- a/rapidtide/scripts/happywarp +++ b/rapidtide/scripts/happywarp @@ -122,7 +122,7 @@ def _get_parser(): parser.add_argument( "--debug", - help="ouput additional debugging information", + help="output additional debugging information", action="store_true", ) diff --git a/rapidtide/stats.py b/rapidtide/stats.py index d16f8ed4..51eb05a8 100644 --- a/rapidtide/stats.py +++ b/rapidtide/stats.py @@ -1036,7 +1036,7 @@ def makemask(image, threshpct=25.0, verbose=False, nozero=False, noneg=False): print( f"fracval: {pctthresh:.2f}", f"threshpct: {threshpct:.2f}", - f"mask threshhold: {threshval:.2f}", + f"mask threshold: {threshval:.2f}", ) themask = np.where(image > threshval, np.int16(1), np.int16(0)) return themask diff --git a/rapidtide/workflows/atlastool.py b/rapidtide/workflows/atlastool.py index 4cd880cf..96db249c 100755 --- a/rapidtide/workflows/atlastool.py +++ b/rapidtide/workflows/atlastool.py @@ -75,7 +75,7 
@@ def _get_parser(): action="store", type=lambda x: pf.is_float(parser, x), metavar="FILE", - help=("Threshhold for autogenerated mask (default is 0.25)."), + help=("Threshold for autogenerated mask (default is 0.25)."), default=0.25, ) parser.add_argument( diff --git a/rapidtide/workflows/endtidalproc.py b/rapidtide/workflows/endtidalproc.py index aeb5552d..560d62be 100755 --- a/rapidtide/workflows/endtidalproc.py +++ b/rapidtide/workflows/endtidalproc.py @@ -90,7 +90,7 @@ def _get_parser(): dest="thresh", metavar="PCT", type=float, - help="Amount of fall (or rise) needed, in percent, to recognize a peak (or through).", + help="Amount of fall (or rise) needed, in percent, to recognize a peak (or trough).", default=1.0, ) parser.add_argument( @@ -138,7 +138,7 @@ def endtidalproc(): args.thestarttime = xvec[thestartpoint] args.theendtime = xvec[theendpoint] - # set parameters - maxtime is the longest to look ahead for a peak (or through) in seconds + # set parameters - maxtime is the longest to look ahead for a peak (or trough) in seconds # lookahead should be '(samples / period) / f' where '4 >= f >= 1.25' might be a good value maxtime = 1.0 f = 2.0 diff --git a/rapidtide/workflows/happy.py b/rapidtide/workflows/happy.py index acbd998e..ea414744 100644 --- a/rapidtide/workflows/happy.py +++ b/rapidtide/workflows/happy.py @@ -331,7 +331,7 @@ def happy_main(argparsingfunc): if args.fliparteries: # add another pass to refine the waveform after getting the new appflips numpasses += 1 - print("Adding a pass to regenerate cardiac waveform using bettter appflips") + print("Adding a pass to regenerate cardiac waveform using better appflips") # output mask size print(f"estmask has {len(np.where(estmask_byslice[:, :] > 0)[0])} voxels above threshold.") @@ -1073,7 +1073,7 @@ def happy_main(argparsingfunc): infodict["respsamplerate"] = returnedinputfreq infodict["numresppts_fullres"] = fullrespts - # account for slice time offests + # account for slice time offsets
offsets_byslice = np.zeros((xsize * ysize, numslices), dtype=np.float64) for i in range(numslices): offsets_byslice[:, i] = slicetimes[i] @@ -1543,12 +1543,12 @@ def happy_main(argparsingfunc): debug=args.debug, ) - # find vessel threshholds + # find vessel thresholds tide_util.logmem("before making vessel masks") hardvesselthresh = tide_stats.getfracvals(np.max(histinput, axis=1), [0.98])[0] / 2.0 softvesselthresh = args.softvesselfrac * hardvesselthresh print( - "hard, soft vessel threshholds set to", + "hard, soft vessel thresholds set to", "{:.3f}".format(hardvesselthresh), "{:.3f}".format(softvesselthresh), ) diff --git a/rapidtide/workflows/happy_parser.py b/rapidtide/workflows/happy_parser.py index bf8a7e02..c017c67e 100755 --- a/rapidtide/workflows/happy_parser.py +++ b/rapidtide/workflows/happy_parser.py @@ -677,7 +677,7 @@ def process_args(inputargs=None): args = _get_parser().parse_args(inputargs) argstowrite = inputargs except SystemExit: - print("Use --help option for detailed informtion on options.") + print("Use --help option for detailed information on options.") raise # save the raw and formatted command lines diff --git a/rapidtide/workflows/pixelcomp.py b/rapidtide/workflows/pixelcomp.py index e77b7315..a014da33 100755 --- a/rapidtide/workflows/pixelcomp.py +++ b/rapidtide/workflows/pixelcomp.py @@ -57,7 +57,7 @@ def _get_parser(): parser.add_argument( "--scatter", action="store_true", - help=("Do a scatter plot intstead of a contour plot."), + help=("Do a scatter plot instead of a contour plot."), default=False, ) parser.add_argument( diff --git a/rapidtide/workflows/rapidtide.py b/rapidtide/workflows/rapidtide.py index 45667f44..90be1dfe 100755 --- a/rapidtide/workflows/rapidtide.py +++ b/rapidtide/workflows/rapidtide.py @@ -1769,7 +1769,7 @@ def rapidtide_main(argparsingfunc): ) LGR.verbose(f"edgebufferfrac set to {optiondict['edgebufferfrac']}") - # intitialize the correlation fitter + # initialize the correlation fitter thefitter = 
tide_classes.SimilarityFunctionFitter( lagmod=optiondict["lagmod"], lthreshval=optiondict["lthreshval"], @@ -2227,7 +2227,7 @@ def rapidtide_main(argparsingfunc): if optiondict["ampthreshfromsig"]: if pcts is not None: LGR.info( - f"setting ampthresh to the p < {1.0 - thepercentiles[0]:.3f} threshhold" + f"setting ampthresh to the p < {1.0 - thepercentiles[0]:.3f} threshold" ) optiondict["ampthresh"] = pcts[0] tide_stats.printthresholds( diff --git a/rapidtide/workflows/rapidtide_parser.py b/rapidtide/workflows/rapidtide_parser.py index 0a91c6b3..1888b8cd 100755 --- a/rapidtide/workflows/rapidtide_parser.py +++ b/rapidtide/workflows/rapidtide_parser.py @@ -1080,7 +1080,7 @@ def _get_parser(): metavar="THRESH", type=float, help=( - "Threshhold value (fraction of maximum) in a histogram " + "Threshold value (fraction of maximum) in a histogram " f"to be considered the start of a peak. Default is {DEFAULT_PICKLEFT_THRESH}." ), default=DEFAULT_PICKLEFT_THRESH, @@ -1160,7 +1160,7 @@ def _get_parser(): type=int, metavar="MAXPASSES", help=( - "Terminate refinement after MAXPASSES passes, whether or not convergence has occured. " + "Terminate refinement after MAXPASSES passes, whether or not convergence has occurred. " f"Default is {DEFAULT_MAXPASSES}." 
), default=DEFAULT_MAXPASSES, @@ -1656,7 +1656,7 @@ def _get_parser(): "--focaldebug", dest="focaldebug", action="store_true", - help=("Enable targetted additional debugging output (used during development)."), + help=("Enable targeted additional debugging output (used during development)."), default=False, ) debugging.add_argument( @@ -1798,7 +1798,7 @@ def process_args(inputargs=None): # what fraction of the correlation window to avoid on either end when # fitting args["edgebufferfrac"] = 0.0 - # only do fits in voxels that exceed threshhold + # only do fits in voxels that exceed threshold args["enforcethresh"] = True # if set to the location of the first autocorrelation sidelobe, # this will fold back sidelobes diff --git a/rapidtide/workflows/showarbcorr.py b/rapidtide/workflows/showarbcorr.py index 704a45b2..700ca117 100755 --- a/rapidtide/workflows/showarbcorr.py +++ b/rapidtide/workflows/showarbcorr.py @@ -377,7 +377,7 @@ def showarbcorr(args): thepxcorr = pearsonr(filtereddata1, filtereddata2) - # intitialize the correlation fitter + # initialize the correlation fitter thexsimfuncfitter = tide_classes.SimilarityFunctionFitter( corrtimeaxis=xcorr_x, lagmin=args.lagmin, diff --git a/rapidtide/workflows/showxcorrx.py b/rapidtide/workflows/showxcorrx.py index 5d6552fb..5980e702 100755 --- a/rapidtide/workflows/showxcorrx.py +++ b/rapidtide/workflows/showxcorrx.py @@ -558,7 +558,7 @@ def showxcorrx(args): ) if args.similaritymetric == "mutualinfo": - # intitialize the similarity function fitter + # initialize the similarity function fitter themifitter = tide_classes.SimilarityFunctionFitter( corrtimeaxis=MI_x_trim, lagmin=args.lagmin, @@ -573,7 +573,7 @@ def showxcorrx(args): ) maxdelaymi = MI_x_trim[np.argmax(theMI_trim)] else: - # intitialize the correlation fitter + # initialize the correlation fitter thexsimfuncfitter = tide_classes.SimilarityFunctionFitter( corrtimeaxis=xcorr_x, lagmin=args.lagmin, diff --git a/setup.py b/setup.py index 0f29b299..3d212e80 
100644 --- a/setup.py +++ b/setup.py @@ -49,7 +49,6 @@ "rapidtide/multiproc", "rapidtide/patchmatch", "rapidtide/peakeval", - "rapidtide/refine", "rapidtide/refinedelay", "rapidtide/refineregressor", "rapidtide/resample", diff --git a/setupbackup/setup.py b/setupbackup/setup.py index 062854e6..07c80e5a 100644 --- a/setupbackup/setup.py +++ b/setupbackup/setup.py @@ -9,11 +9,11 @@ from codecs import open from os import path +import versioneer + # Always prefer setuptools over distutils from setuptools import find_namespace_packages, find_packages, setup -import versioneer - here = path.abspath(path.dirname(__file__)) # Get the long description from the README file @@ -49,8 +49,7 @@ "rapidtide/miscmath", "rapidtide/multiproc", "rapidtide/peakeval", - "rapidtide/refine", - "rapidtide/refine_factored", + "rapidtide/refineregressor", "rapidtide/resample", "rapidtide/simfuncfit", "rapidtide/stats",
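
The `[tool.codespell]` hunk in pyproject.toml above sets `ignore-regex = '\bsubjeT\b'`; codespell skips any text matching that pattern. As a quick sanity sketch (plain Python `re`, not codespell itself), the `\b` word boundaries mean only the standalone token is exempted:

```python
import re

# Pattern from the [tool.codespell] ignore-regex added in this diff;
# codespell suppresses any span that this regex matches.
ignore = re.compile(r"\bsubjeT\b")

# Whole word: matched by the regex, so codespell would skip it.
print(bool(ignore.search("the subjeT variable")))  # True

# Embedded in a longer identifier: no word boundary after the T,
# so the regex does not match and codespell would still flag it.
print(bool(ignore.search("subjeTmatter")))  # False
```

This is why `ignore-regex` is safer than adding the token to `ignore-words-list`: the exemption stays scoped to exact whole-word occurrences.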