sbodorkos edited this page Apr 18, 2017 · 11 revisions

2016.06.21

Step 4: Calculation of 'spot-mean' for each 'ratio of interest' for each analysis

Having derived the Ndod (= Nscans - 1) interpolated ratio values and errors in Step 3, the next step is to combine these values into a single isotopic ratio (per 'ratio of interest', per analysis), to match the values tabulated for the measured nuclide-ratios on the StandardData and SampleData sheets of the SQUID-workbook.

SQUID offers users a choice of calculation method: EITHER the error-weighted linear regression of interpolated ratios vs time, with the regression-error calculated at analysis midtime OR the error-weighted mean of interpolated ratios and its error, independent of time. (In fact, SQUID allows sub-choices, depending on whether each 'ratio of interest' involves nuclides of a single element or not, but we will ignore the finer distinctions for the moment, partly because Geoscience Australia doesn't use them anyway.)

This implies the existence of a Boolean 'UserLinFits' (= TRUE when the user preference is for mean ratios to be calculated via error-weighted linear regression vs time, FALSE when the user preference is for mean ratios to be calculated via error-weighted mean independent of time).

As stipulated in the early part of Step 3, single-scan data (i.e. Ndod = Rct = N = 0) will never be subjected to Dodson interpolation, and thus will never occur as input to WtdLinCorr. Similarly, at the end of Step 3, specific provision is made for two-scan data (i.e. Ndod = Rct = N = 1), because although Dodson interpolation has taken place, it has generated only a single value and error, so there is nothing to 'average'. Therefore, Step 4 is applicable only to nuclide-ratios collected over three or more scans (i.e. Ndod = Rct = N > 1; admittedly, this encompasses the vast majority of 'proper' SHRIMP data).

In addition, Ludwig specified that the linear-regression-to-analysis-midtime calculation is not permitted unless Ndod = Rct = N > 3, regardless of the value of UserLinFits (i.e. if UserLinFits = TRUE but N = 2 or 3, the calculation is performed as an error-weighted average instead, without user consultation). These scenarios are best handled by logic preceding the call of the calculation routines (as per the code below), although it should be noted that the WtdLinCorr subroutine (invoked below) contains its own check, as a backup that is redundant in this particular (Step 4) application of the WtdLinCorr subroutine.

NOTE that all the following 'Step 4' code sits within the final "Else..." branch of the code documented in Step 3. The 'Step 4' code proceeds:

  If UserLinFits = True And Rct > 3 Then
    --Calculate error-wtd linear regression of ratios vs time, with error calculated  
    --at mid burn-time, using index-adjacent ratio error-correlations
    WtdLinCorr 2, Rct, InterpRatVal, SigRho, MSWD, Probfit, 0, RatioInter, ...
      SigmaRatioInter, Bad, RatioSlope, SigmaRatioSlope, CovSlopeInter, RatioInterpTime

    MidTime = ( time_stamp_sec[Nscans, Nspecies] + time_stamp_sec[1, 1] ) / 2  
    RatioMean = (RatioSlope * MidTime) + RatioInter
    RatioMeanSig = sqrt( (MidTime * SigmaRatioSlope) ^ 2 + ...
      (SigmaRatioInter ^ 2) + (2 * MidTime * CovSlopeInter) )

  Else  
    --Calculate error-wtd mean, using index-adjacent ratio error-correlations
    WtdLinCorr 1, Rct, InterpRatVal, SigRho, MSWD, Probfit, 0, RatioMean, ...
      RatioMeanSig, Bad

  End If
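For orientation, the branch logic above can be sketched in Python. This is an illustrative stand-in, not SQUID's WtdLinCorr: it assumes uncorrelated ratio errors (a diagonal SigRho), whereas SQUID also carries index-adjacent error correlations. The midtime evaluation and the error propagation do, however, follow the formulas in the pseudocode above; all function and variable names here are hypothetical.

```python
import math

def spot_mean(interp_rat_val, interp_rat_sig, ratio_interp_time,
              user_lin_fits, mid_time):
    """Illustrative Step-4 sketch: error-weighted regression vs
    error-weighted mean. Assumes uncorrelated ratio errors; SQUID's
    WtdLinCorr additionally handles index-adjacent correlations."""
    n = len(interp_rat_val)
    w = [1.0 / s ** 2 for s in interp_rat_sig]       # weights = 1/sigma^2
    if user_lin_fits and n > 3:
        # Error-weighted linear fit y = slope * t + inter
        sw = sum(w)
        swx = sum(wi * t for wi, t in zip(w, ratio_interp_time))
        swy = sum(wi * y for wi, y in zip(w, interp_rat_val))
        swxx = sum(wi * t * t for wi, t in zip(w, ratio_interp_time))
        swxy = sum(wi * t * y
                   for wi, t, y in zip(w, ratio_interp_time, interp_rat_val))
        d = sw * swxx - swx ** 2
        slope = (sw * swxy - swx * swy) / d
        inter = (swxx * swy - swx * swxy) / d
        var_slope = sw / d
        var_inter = swxx / d
        cov_slope_inter = -swx / d
        # Evaluate the regression and its error at analysis midtime,
        # as per the pseudocode above
        ratio_mean = slope * mid_time + inter
        ratio_mean_sig = math.sqrt(mid_time ** 2 * var_slope + var_inter
                                   + 2.0 * mid_time * cov_slope_inter)
    else:
        # Error-weighted mean, independent of time
        ratio_mean = sum(wi * y for wi, y in zip(w, interp_rat_val)) / sum(w)
        ratio_mean_sig = math.sqrt(1.0 / sum(w))
    return ratio_mean, ratio_mean_sig
```

Note that the covariance term enters linearly (not squared), exactly as in the RatioMeanSig expression above; for a regression pivoted near the data centroid, CovSlopeInter is negative and reduces the midtime error.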

The WtdLinCorr subroutine is documented separately (https://github.com/CIRDLES/ET_Redux/wiki/SHRIMP:-Sub-WtdLinCorr) as it is a generalised function used by SQUID 2.50 in a variety of contexts. For the purpose of Step 4-specific orientation of the above code relative to WtdLinCorr:

  • the first WtdLinCorr argument (i.e. 1 or 2) is the input integer value of Avg1LinRegr2.
  • the second WtdLinCorr argument (i.e. Rct from Step 3) is the input integer value of N.
  • the third WtdLinCorr argument (i.e. InterpRatVal from Step 3) is a vector of length N, as the input Y.
  • the fourth WtdLinCorr argument (i.e. SigRho from Step 3) is an array of size N x N, as the input SigRho.
  • the remaining arguments of WtdLinCorr are outputs of the subroutine.

Testing of Step 4 requires, in addition to the WtdLinCorr subroutine, the DeletePoint (https://github.com/CIRDLES/ET_Redux/wiki/SHRIMP:-Sub-DeletePoint), WeightedLinearCorr (https://github.com/CIRDLES/ET_Redux/wiki/SHRIMP:-Sub-WeightedLinearCorr), and WtdAvCorr (https://github.com/CIRDLES/ET_Redux/wiki/SHRIMP:-Sub-WtdAvCorr) subroutines, all of which are now documented.

Once the WtdLinCorr subroutine has returned values of RatioMean and RatioMeanSig:

If Bad = TRUE --error-trapping  
  RatioVal = [error value]
  RatioFractErr = [error value]
ElseIf RatioMean = 0  
  RatioVal = 1E-32
  RatioFractErr = 1
Else  
  RatioVal = RatioMean
  RatioFractErr = Max(1E-32, RatioMeanSig) / Abs(RatioVal)  
  
  --Bodorkos 2017-04-04: next 2 lines inserted to force VBA-Java match  
  RatioVal = SigDigStrDbl( RatioVal, 12 ) --12 sig. digits
  RatioFractErr = SigDigStrDbl( RatioFractErr, 12 ) --12 sig. digits
End If  
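The post-processing above can be expressed as a small Python sketch. The function and variable names are hypothetical, and sig_dig_round is a stand-in for SQUID's SigDigStrDbl (which rounds by formatting the double as a 12-significant-digit string); the guard values (1E-32 floor, unity fractional error for a zero ratio) are taken directly from the pseudocode above.

```python
def sig_dig_round(x, digits=12):
    """Round to a fixed number of significant digits via string
    formatting (a stand-in for SQUID's SigDigStrDbl)."""
    if x == 0.0:
        return 0.0
    return float(f"{x:.{digits - 1}e}")

def finalize_ratio(ratio_mean, ratio_mean_sig, bad, error_value=float("nan")):
    """Illustrative post-processing of WtdLinCorr output (Step 4)."""
    if bad:
        return error_value, error_value      # error-trapping
    if ratio_mean == 0.0:
        return 1e-32, 1.0                    # guard against a zero ratio
    ratio_val = ratio_mean
    ratio_fract_err = max(1e-32, ratio_mean_sig) / abs(ratio_val)
    # Bodorkos 2017-04-04: round to 12 sig. digits to force VBA-Java match
    return sig_dig_round(ratio_val), sig_dig_round(ratio_fract_err)
```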

NOTE that all the foregoing code sits within the final "Else..." branch of the code documented in Step 3.

The final step (i.e. when reaching the end of the Step 3 code-blocks, not just the Step 4 code-blocks) is to multiply each RatioFractErr by 100, to make it a percentage. In SQUID, this operation is part of the cell-filling routine, which I don't think is particularly helpful.

Sanity check: Mean ratio and percentage error, per 'ratio of interest', per analysis
This array will have one row per analysis, and two columns per 'ratio of interest' (mean value and percentage error). For the demo XML, the array will have 114 rows of data (one per analysis), and 23 columns (3 for row identifiers, then 2 for each of the 10 'ratios of interest').

It needs 3 'left-hand' columns to allow the rows to be identified and sorted:

  • Title = analysis-specific text-string read from XML
  • Date = analysis-specific date read from XML, to be expressed as YYYY-MM-DD HH24:MI:SS
  • Type = 'ref mat' or 'unknown'; analyses with prefix 'T.' to be labelled 'ref mat', all others 'unknown'

These are to be followed by 2 columns for each 'ratio of interest' (i.e. 20 columns for the demo XML):

  • [NUM/DEN].Value = calculated decimal value for 'RatioVal' from Step 4 (labelled by looking up a list of the 'ratios of interest'), for the specified combination of analysis and 'ratio of interest'.
  • [NUM/DEN].1SigmaPct = calculated percentage value for 'RatioFractErr' from Step 4, for the specified combination of analysis and 'ratio of interest'.

Sorting: Primary criterion = Type (ascending; 'ref mat' before 'unknown', so alphabetical would do), secondary criterion = Date (ascending).
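The row-labelling and two-level sort can be sketched in Python. The example analysis titles and timestamps below are invented for illustration; only the 'T.' prefix rule and the sort criteria come from the text above. Note that because 'ref mat' sorts alphabetically before 'unknown', a plain ascending string sort on Type suffices.

```python
from datetime import datetime

# Hypothetical rows (per-ratio value/error columns omitted for brevity)
rows = [
    {"Title": "OG1-5.1",   "Date": datetime(2016, 6, 21, 10, 15, 0)},
    {"Title": "T.TEM-2.1", "Date": datetime(2016, 6, 21, 9, 30, 0)},
    {"Title": "T.TEM-1.1", "Date": datetime(2016, 6, 21, 8, 45, 0)},
]

# Type = 'ref mat' for titles prefixed 'T.', otherwise 'unknown'
for r in rows:
    r["Type"] = "ref mat" if r["Title"].startswith("T.") else "unknown"

# Primary criterion: Type ascending; secondary criterion: Date ascending
rows.sort(key=lambda r: (r["Type"], r["Date"]))
```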
