<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Metrology Rules &#187; Uncategorized</title>
	<atom:link href="https://metrologyrules.com/category/uncategorized/feed/" rel="self" type="application/rss+xml" />
	<link>https://metrologyrules.com</link>
	<description>For Metrologists by Metrologists</description>
	<lastBuildDate>Sat, 04 Apr 2026 01:00:40 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.0.38</generator>
	<item>
		<title>Interpolating between discrete calibration points: least squares curve fitting</title>
		<link>https://metrologyrules.com/interpolating-between-discrete-calibration-points-least-squares-curve-fitting/</link>
		<comments>https://metrologyrules.com/interpolating-between-discrete-calibration-points-least-squares-curve-fitting/#comments</comments>
		<pubDate>Sat, 04 Apr 2026 01:00:40 +0000</pubDate>
		<dc:creator><![CDATA[Hans Liedberg]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://metrologyrules.com/?p=977</guid>
		<description><![CDATA[ABSTRACT In preceding publications, the accuracy of interpolation using reference functions for temperature sensors (thermocouples and platinum resistance thermometers (PRTs)), as well as a crude approach to interpolation uncertainty in the absence of any knowledge of the interpolating function, have been discussed. This article discusses how to fit curves to PRT calibration results using the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>ABSTRACT<br />
In preceding publications, the accuracy of <a href="https://metrologyrules.com/interpolating-between-discrete-calibration-points-the-effect-on-uncertainty/" title="Interpolating between discrete calibration points: the effect on uncertainty" rel="noopener" target="_blank">interpolation using reference functions for temperature sensors</a> (thermocouples and platinum resistance thermometers (PRTs)), as well as a <a href="https://metrologyrules.com/interpolating-between-discrete-calibration-points-the-effect-on-uncertainty-addendum/" title="Interpolating between discrete calibration points: the effect on uncertainty - Addendum" rel="noopener" target="_blank">crude approach to interpolation uncertainty in the absence of any knowledge of the interpolating function</a>, have been discussed. This article discusses how to fit curves to PRT calibration results using the linear least squares technique (implemented using matrix functions in Microsoft Excel or Libreoffice Calc), both directly to the measured data and to deviations from a reference function. (An overview of the fitting method may be found in [<a href="https://s3.amazonaws.com/nrbook.com/book_F210.html" title="Numerical Recipes in Fortran 77" rel="noopener" target="_blank">Numerical Recipes in Fortran 77, Chapter 15. Modeling of data</a>].)</p>
<p>INTRODUCTION</p>
<p>In the preceding articles, we saw that<br />
 <a href="https://metrologyrules.com/interpolating-between-discrete-calibration-points-the-effect-on-uncertainty/" title="Interpolating between discrete calibration points: the effect on uncertainty" rel="noopener" target="_blank">(i)</a> industrial PRT calibration results could be interpolated &#8220;piece-wise&#8221; (between any two calibration points a few hundred degrees Celsius apart) to an accuracy of around 0.05 °C, when expressed as deviations from the <a href="https://en.wikipedia.org/wiki/Callendar%E2%80%93Van_Dusen_equation" title="Callendar-van Dusen equations" rel="noopener" target="_blank">Callendar-van Dusen reference functions</a>, and,<br />
 <a href="https://metrologyrules.com/interpolating-between-discrete-calibration-points-the-effect-on-uncertainty-addendum/" title="Interpolating between discrete calibration points: the effect on uncertainty - Addendum" rel="noopener" target="_blank">(ii)</a> without any knowledge of the accuracy of the interpolating function, the accuracy of interpolated values could be grossly estimated as 0.15 to 0.5 °C (for the above-mentioned spacing between calibration points).<br />
While the latter approach can be applied generically to almost any calibration data, it has the drawback that the uncertainty of interpolated values is unrealistically large when working close to a calibration point. In the present article, the most accurate approach to the problem will be implemented, namely, to fit a curve to the complete set of calibration results. Firstly, a Callendar equation will be fitted directly to the measured data, and, secondly, a quadratic polynomial will be fitted to the deviations of the measured data from the <a href="https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=904572" title="ITS-90 PRT reference and deviation functions" rel="noopener" target="_blank">ITS-90 PRT reference function</a>. As the ITS-90 reference function deals with most PRT non-linearity, it is expected that the latter approach will produce the more accurate interpolation.</p>
<p>THE MODEL EQUATION</p>
<p>The Callendar equation, applicable above 0 °C, is: Rt = R0∙(1 + A∙t + B∙t^2), where Rt is the resistance (in Ω) at temperature t (in °C) and R0 is the resistance (in Ω) at the ice point (0.00 °C). Rearranging the equation, A∙t + B∙t^2 = Rt/R0 &#8211; 1, indicating that the independent variable x = t and the dependent variable y = Rt/R0 &#8211; 1.<br />
(As the sensitivity, d(Rt/R0)/dt, will be required to convert uncertainties from temperature units, we also note that d(Rt/R0)/dt = A + 2∙B∙t. We will use the coefficients from IEC 60751, namely, A = 3.9083e-3 and B = -5.775e-7, to calculate sensitivities below.)</p>
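<p>As a numerical cross-check, the Callendar equation and its sensitivity can be evaluated in a few lines of Python (a sketch using the IEC 60751 coefficients quoted above; the function names are illustrative, not part of the original spreadsheet implementation):</p>

```python
# Callendar equation above 0 degC: Rt = R0*(1 + A*t + B*t**2)
# Coefficients A and B from IEC 60751, as quoted in the text above.
A = 3.9083e-3
B = -5.775e-7

def resistance_ratio(t):
    """Rt/R0 from the Callendar equation, t in degC."""
    return 1.0 + A * t + B * t**2

def sensitivity(t):
    """d(Rt/R0)/dt = A + 2*B*t, used to convert uncertainties
    between temperature units and resistance-ratio units."""
    return A + 2.0 * B * t
```

<p>For a nominal 100 Ω PRT, multiplying the sensitivity at 0 °C by R0 gives 100 × 3.9083e-3 ≈ 0.391 Ω/°C, the value used below to reduce R(0.01 °C) to R0.</p>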
<p>The measured data are as follows:<br />
<figure id="attachment_991" style="width: 293px;" class="wp-caption aligncenter"><a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/IPRT_raw_data.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/IPRT_raw_data.png" alt="IPRT calibration data." width="293" height="142" class="size-full wp-image-991" /></a><figcaption class="wp-caption-text">IPRT calibration data.</figcaption></figure><br />
To find R0 from R(0.01 °C): R0 = R(0.01 °C) + (0.00 °C &#8211; 0.01 °C) ∙ 0.391 Ω/°C. Uncertainties are converted to the same units as y. (The 1st data point is used to determine R0, so only the four subsequent data points are used in the fit.)<br />
<figure id="attachment_996" style="width: 381px;" class="wp-caption aligncenter"><a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/IPRT_x_y_U1.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/IPRT_x_y_U1.png" alt="IPRT data converted to appropriate units." width="381" height="136" class="size-full wp-image-996" /></a><figcaption class="wp-caption-text">IPRT data converted to appropriate units.</figcaption></figure><br />
The general model equation is c1∙f1(x) + c2∙f2(x) + &#8230; = y. For the Callendar equation, c1 = A, f1(x) = x, c2 = B and f2(x) = x^2.<br />
The four data points lead to four simultaneous equations, namely:<br />
c1∙f1(x1) + c2∙f2(x1) = y1<br />
c1∙f1(x2) + c2∙f2(x2) = y2<br />
c1∙f1(x3) + c2∙f2(x3) = y3<br />
c1∙f1(x4) + c2∙f2(x4) = y4<br />
Expressed in matrix notation, they are:<br />
<figure id="attachment_998" style="width: 548px;" class="wp-caption aligncenter"><a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Simultaneous_equations_unweighted.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Simultaneous_equations_unweighted.png" alt="The unweighted simultaneous equations, expressed in matrix notation, are: A x c = y." width="548" height="88" class="size-full wp-image-998" /></a><figcaption class="wp-caption-text">The unweighted simultaneous equations, expressed in matrix notation, are: A x c = y.</figcaption></figure></p>
<p>WEIGHTING FACTORS</p>
<p>Now, apply the weighting factors 1/ui to each equation (smaller std uncertainty => larger weight). The weighted simultaneous equations, expressed in matrix notation, are: W x A x c = W x y.<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Weighting_matrix_W1.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Weighting_matrix_W1.png" alt="Weighting_matrix_W" width="383" height="86" class="aligncenter size-full wp-image-1003" /></a></p>
<p>THE SOLUTION &#8211; NORMAL EQUATIONS</p>
<p>The optimal (least squares) solution to this over-determined system is obtained by left-multiplying by the matrix transpose (W∙A)^T = A^T∙W^T, to obtain the normal equations:<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Normal_equations.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Normal_equations.png" alt="Normal_equations" width="195" height="20" class="aligncenter size-full wp-image-1005" /></a><br />
Matrix W^T∙W simply contains 1/variance at each calibration point:<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Matrix_WtW.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Matrix_WtW.png" alt="Matrix_WtW" width="383" height="88" class="aligncenter size-full wp-image-1007" /></a><br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Coeff_c1.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Coeff_c1.png" alt="Coeff_c" width="882" height="39" class="aligncenter size-full wp-image-1013" /></a><br />
Then, to find the fitted y-value for any value of x:<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/y_for_x.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/y_for_x.png" alt="y_for_x" width="473" height="43" class="aligncenter size-full wp-image-1010" /></a><br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Variance_of_y.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Variance_of_y.png" alt="Variance_of_y" width="706" height="54" class="aligncenter size-full wp-image-1011" /></a><br />
Finally, expanded uncertainty of y = 2*√variance(y): this is called the “propagated uncertainty” of y at x.</p>
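<p>For readers working outside Excel or Calc, the same matrix algebra can be sketched in Python with NumPy standing in for TRANSPOSE(), MMULT() and MINVERSE(). The data below are synthetic (generated exactly from the IEC 60751 coefficients, with assumed uncertainties), not the article's measured values:</p>

```python
import numpy as np

# Illustrative data: x = t (degC), y = Rt/R0 - 1, u = std uncertainty of y.
x = np.array([100.0, 232.0, 420.0, 660.0])
A_iec, B_iec = 3.9083e-3, -5.775e-7
y = A_iec * x + B_iec * x**2          # synthetic "measurements"
u = np.full(4, 2e-5)                  # assumed standard uncertainties

# Design matrix A: columns f1(x) = x, f2(x) = x^2 (the Callendar model).
A = np.column_stack([x, x**2])
W = np.diag(1.0 / u)                  # weighting matrix, 1/u_i per equation

# Normal equations: (A^T W^T W A) c = A^T W^T W y
AtWtW = A.T @ W.T @ W
cov = np.linalg.inv(AtWtW @ A)        # covariance matrix of the coefficients
c = cov @ (AtWtW @ y)                 # fitted coefficients [A, B]

def fitted_y(t):
    """Fitted value and its propagated standard uncertainty at x = t."""
    f = np.array([t, t**2])
    return f @ c, np.sqrt(f @ cov @ f)
```

<p>Because the synthetic y-values satisfy the model exactly, the fit recovers A and B; with real data, the second return value of fitted_y() is the propagated standard uncertainty (multiply by 2 for expanded uncertainty, as above).</p>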
<p>The numerical implementation of this “linear least squares” curve-fitting technique follows, using the Excel or Calc matrix functions <a href="https://support.microsoft.com/en-us/office/transpose-function-ed039415-ed8a-4a81-93e9-4b6dfac76027" title="TRANSPOSE" rel="noopener" target="_blank">TRANSPOSE()</a>, <a href="https://support.microsoft.com/en-us/office/mmult-function-40593ed7-a3cd-4b6b-b9a3-e4ad3c7245eb" title="MMULT" rel="noopener" target="_blank">MMULT()</a> and <a href="https://support.microsoft.com/en-us/office/minverse-function-11f55086-adde-4c9f-8eb9-59da2d72efc6" title="MINVERSE" rel="noopener" target="_blank">MINVERSE()</a>:<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Numerical_implementation.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Numerical_implementation.png" alt="Numerical_implementation" width="798" height="344" class="aligncenter size-full wp-image-1015" /></a><br />
Now to check the fitted curve against the measured data (residual = measured – fitted):<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Residuals.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Residuals.png" alt="Residuals" width="793" height="112" class="aligncenter size-full wp-image-1018" /></a><br />
Are the residuals small enough? In other words, does the model fit the data well enough, relative to the uncertainties? The chi-squared statistic, chi^2 = [(residual_1/u1)^2 + (residual_2/u2)^2 + …], will tell us:<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Chi-squared.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Chi-squared.png" alt="Chi-squared" width="380" height="121" class="aligncenter size-full wp-image-1019" /></a><br />
Chi-squared is larger than the degrees of freedom (= number of data points &#8211; number of fitted parameters = 4-2), so either the uncertainties are underestimated, or the model does not represent the data well (relative to the uncertainties). If chi^2 ≲ d.o.f., then the fit is “good enough”. (This is only strictly true if the uncertainties at different points are uncorrelated, but we will assume that this is the case.)</p>
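<p>The goodness-of-fit rule of thumb just described takes only a few lines (a sketch; function names are illustrative):</p>

```python
def chi_squared(residuals, u):
    """chi^2 = sum((residual_i / u_i)^2), as defined in the text."""
    return sum((r / ui) ** 2 for r, ui in zip(residuals, u))

def fit_is_good_enough(residuals, u, n_params):
    """Rule of thumb from the text: chi^2 <~ degrees of freedom,
    where d.o.f. = number of data points - number of fitted parameters.
    (Strictly valid only for uncorrelated uncertainties.)"""
    dof = len(residuals) - n_params
    return chi_squared(residuals, u) <= dof
```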
<p>DEVIATION FROM ITS-90 REFERENCE FUNCTION</p>
<p>Perhaps direct fitting to the Callendar equation is not good enough, at this level of uncertainty. Let&#8217;s try deviations from the ITS-90 reference function &#8211; the reference function should take care of much of the PRT&#8217;s non-linearity, leaving us with smoother data to fit.</p>
<p>For the range 0 to 660 °C, the ITS-90 document gives the deviation function as: W-Wr = a∙(W-1) + b∙(W-1)^2 + c∙(W-1)^3, where W = Rt/Rtp (Rtp being the resistance at the water triple point, 0.01 °C) and Wr is the value of the reference function.<br />
It is good practice in curve fitting to use as few coefficients as will adequately fit the data, so we will try two, a and b.<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/ITS-90_dev_data.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/ITS-90_dev_data.png" alt="ITS-90_dev_data" width="465" height="139" class="aligncenter size-full wp-image-1025" /></a><br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Dev_implementation.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Dev_implementation.png" alt="Dev_implementation" width="797" height="340" class="aligncenter size-full wp-image-1026" /></a><br />
Now to check the fitted curve against the measured data (residual = measured – fitted):<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Resid_dev.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Resid_dev.png" alt="Resid_dev" width="793" height="110" class="aligncenter size-full wp-image-1028" /></a><br />
It can be seen that this fit is better than the previous one. (Residuals are around half the size.)<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Chi2_dev.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Chi2_dev.png" alt="Chi2_dev" width="383" height="123" class="aligncenter size-full wp-image-1029" /></a><br />
Chi-squared is smaller than the degrees of freedom, indicating that the curve fits the data well, within the uncertainties.</p>
<p>Below is a table of fitted values. Because of the structure of the ITS-90 functions, the uncertainty of the curve is zero at the water triple point (WTP, 0.01 °C). The uncertainty at WTP is added to the uncertainty of the curve, in the rightmost column.<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Fitted_table.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/04/Fitted_table.png" alt="Fitted_table" width="794" height="487" class="aligncenter size-full wp-image-1030" /></a></p>
<p>CONCLUSIONS</p>
<p>∙Both curves, fitted directly to measured PRT data, and to deviations from a reference function, represent the behaviour of the UUT better than do the individual data points. (The curves take into account all the data points, thereby potentially &#8220;smoothing out&#8221; random errors in measurement.)<br />
∙The ITS-90 deviation function fits the data better, as is expected when using a good reference function.<br />
∙When weighted least squares fitting is performed, the covariance matrix provides a statistically justified manner of propagating uncertainty to intermediate points, which results in small uncertainties. (To be strictly correct, we should have considered possible correlation between data points, but, as long as the dominant uncertainty component(s) are uncorrelated, our approach is reasonable.) </p>
<p>(Contact the author at LMC-Solutions.co.za.)</p>
]]></content:encoded>
			<wfw:commentRss>https://metrologyrules.com/interpolating-between-discrete-calibration-points-least-squares-curve-fitting/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Interpolating between discrete calibration points: the effect on uncertainty &#8211; Addendum</title>
		<link>https://metrologyrules.com/interpolating-between-discrete-calibration-points-the-effect-on-uncertainty-addendum/</link>
		<comments>https://metrologyrules.com/interpolating-between-discrete-calibration-points-the-effect-on-uncertainty-addendum/#comments</comments>
		<pubDate>Fri, 03 Apr 2026 01:00:38 +0000</pubDate>
		<dc:creator><![CDATA[Hans Liedberg]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://metrologyrules.com/?p=917</guid>
		<description><![CDATA[ABSTRACT This publication follows &#8220;Interpolating between discrete calibration points: the effect on uncertainty&#8221; of September 2025, describing a more general approach to interpolation of calibration uncertainty (not specific to a particular type of sensor), drawing a tentative conclusion regarding the spacing of calibration points versus resultant uncertainty, and applying this approach to digital thermometer, PRT [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>ABSTRACT<br />
This publication follows &#8220;<a href="https://metrologyrules.com/interpolating-between-discrete-calibration-points-the-effect-on-uncertainty/" title="Interpolating between discrete calibration points: the effect on uncertainty" rel="noopener" target="_blank">Interpolating between discrete calibration points: the effect on uncertainty</a>&#8221; of September 2025, describing a more general approach to interpolation of calibration uncertainty (not specific to a particular type of sensor), drawing a tentative conclusion regarding the spacing of calibration points versus resultant uncertainty, and applying this approach to digital thermometer, PRT and thermocouple calibration data.</p>
<p>INTERPOLATING UNCERTAINTY &#8220;BY EYE&#8221;</p>
<p>Here are two sets of calibration results, reporting correction or error of the Unit Under Test (UUT), and expanded uncertainty, at various temperatures:<br />
<figure id="attachment_927" style="width: 448px;" class="wp-caption aligncenter"><a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/DG_IPRT_bare2.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/DG_IPRT_bare2.png" alt="Calibration results of a digital thermometer with type K thermocouple sensor, and an industrial PRT." width="448" height="154" class="size-full wp-image-927" /></a><figcaption class="wp-caption-text">Calibration results of a digital thermometer with type K thermocouple sensor, and an industrial PRT.</figcaption></figure><br />
Looking at the left-hand (digital thermometer) results, what do we observe?:<br />
1. The correction at all three temperatures is effectively constant, relative to the calibration uncertainty.<br />
2. The UUT is a thermocouple thermometer with a range of -50 to 300 °C. So, none of the calibration temperatures appears to be &#8220;special&#8221;. (In this context, &#8220;special&#8221; temperatures are those where the thermometer might be adjusted to have small corrections, for example, the ends of the operating range, room temperature (where all signal comes from Cold Junction Compensation and none from the sensor), and temperatures where the measuring electronics change range. For a liquid-in-glass thermometer, &#8220;special&#8221; temperatures would be scale pointing marks.)<br />
These observations lead us to a (tentative) conclusion: The thermometer correction is stable at three &#8220;random&#8221; temperatures, so the correction at intermediate temperatures can probably be estimated with confidence, without any increase in uncertainty.</p>
<p>Now, looking at the right-hand (IPRT) results, we see:<br />
1. The error varies significantly between calibration points, relative to the uncertainty.<br />
2. The error varies fairly linearly with temperature, though deviation from a straight line is sometimes larger than the calibration uncertainty (at 232 °C, in this case):<br />
<figure id="attachment_937" style="width: 605px;" class="wp-caption aligncenter"><a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/IPRT_interp_graph1.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/IPRT_interp_graph1.png" alt="Calibration results of the above industrial PRT, graphed." width="605" height="340" class="size-full wp-image-937" /></a><figcaption class="wp-caption-text">Calibration results of the above industrial PRT, graphed.</figcaption></figure><br />
What may we conclude from the IPRT data?: If we wish to estimate the error at an intermediate temperature by linear interpolation, the uncertainty at this intermediate temperature should probably be larger than that at the neighbouring calibration temperatures.</p>
<p>INTERPOLATING UNCERTAINTY &#8211; NUMERICAL ESTIMATE</p>
<p>How may we numerically estimate the additional uncertainty caused by interpolation?: If we assume that the value at the interpolated point lies between the two neighbouring calibration values, with an equal probability anywhere in that range, the uncertainty caused by interpolation may be estimated as |corr_1 &#8211; corr_2|, as the full-width of a rectangular distribution. (Note: The assumption that the value lies between the two neighbouring calibration values is not necessarily correct for &#8220;badly behaved&#8221; instruments.) To obtain standard uncertainty, u_interp, divide this by 2√3. Or, for expanded uncertainty, U_interp = u_interp*2 = |corr_1 &#8211; corr_2| / √3. Total uncertainty U_tot = √[U_cal^2 + U_interp^2].<br />
This technique is applied to the digital thermometer (using a more recent, larger, data set) and to the IPRT mentioned above, with values interpolated between low, middle and high temperatures (in bold) being compared to measured values at intermediate temperatures:<br />
<figure id="attachment_942" style="width: 1050px;" class="wp-caption aligncenter"><a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/DG_IPRT_interp.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/DG_IPRT_interp.png" alt="Linear interpolation between bold calibration results, for digital thermometer and IPRT." width="1050" height="186" class="size-full wp-image-942" /></a><figcaption class="wp-caption-text">Linear interpolation between bold calibration results, for digital thermometer and IPRT.</figcaption></figure><br />
Linear interpolation between the bold calibration results differs from the measured corrections or errors at intermediate temperatures (residual = measured &#8211; interpolated) by less than the estimated total uncertainty, suggesting that this approach to interpolating uncertainty is &#8220;safe&#8221;, at least for these two instruments.<br />
For the digital thermometer, the uncertainty of interpolated values is essentially the same as that of neighbouring calibration points (it differs by less than 5%), while interpolated values for the IPRT have significantly (~ten times) larger uncertainty. This seems like a reasonable approach, without having any deeper knowledge of the interpolating function (reference function) being used. (The only assumption is that the correction or error varies &#8220;more-or-less&#8221; <a href="https://en.wikipedia.org/wiki/Monotonic_function" title="Monotonic function" rel="noopener" target="_blank">monotonically</a> between calibration points.) Note that interpolation may be much more accurate if one does have deeper knowledge of the interpolating function, as reported in the <a href="https://metrologyrules.com/interpolating-between-discrete-calibration-points-the-effect-on-uncertainty/" title="Interpolating between discrete calibration points: the effect on uncertainty" rel="noopener" target="_blank">preceding paper</a>, or if one performs a <a href="https://www.researchgate.net/publication/403431171_Fitting_curves_to_thermocouple_and_platinum_resistance_thermometer_calibration_data" title="Fitting curves to TC and PRT calibration data" rel="noopener" target="_blank">least squares fit</a> to the full set of data.</p>
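<p>The uncertainty arithmetic described above fits in one short Python function (a sketch of the formulas in the text; the function and argument names are illustrative):</p>

```python
import math

def interpolation_uncertainty(corr_1, corr_2, U_cal):
    """Total expanded (k=2) uncertainty at a point interpolated linearly
    between two calibration points, per the rectangular-distribution
    argument in the text. corr_1, corr_2 are the neighbouring calibration
    corrections; U_cal is the expanded calibration uncertainty."""
    # Full-width rectangular distribution -> standard uncertainty
    u_interp = abs(corr_1 - corr_2) / (2.0 * math.sqrt(3.0))
    U_interp = 2.0 * u_interp          # expanded, k = 2: |corr_1-corr_2|/sqrt(3)
    return math.sqrt(U_cal**2 + U_interp**2)
```

<p>For a stable instrument (corr_1 ≈ corr_2), U_tot reduces to U_cal; for the IPRT-like case, the interpolation term dominates.</p>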
<p>The digital thermometer results above were stable, and those of the IPRT close to linear. What about non-linear results? The tables and graphs below show results for a type R thermocouple (linear) and a type K thermocouple (non-linear):<br />
<figure id="attachment_962" style="width: 935px;" class="wp-caption aligncenter"><a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/TC_interp.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/TC_interp.png" alt="Linear interpolation between bold calibration results, for type R and type K thermocouples." width="935" height="257" class="size-full wp-image-962" /></a><figcaption class="wp-caption-text">Linear interpolation between bold calibration results, for type R and type K thermocouples.</figcaption></figure><br />
<figure id="attachment_963" style="width: 1357px;" class="wp-caption aligncenter"><a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/TC_R_K_interp_graph.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/TC_R_K_interp_graph.png" alt="Calibration results for type R and type K thermocouples, graphed." width="1357" height="391" class="size-full wp-image-963" /></a><figcaption class="wp-caption-text">Calibration results for type R and type K thermocouples, graphed.</figcaption></figure><br />
It can be seen that, though the type K&#8217;s results are non-linear (and even slightly non-monotonic) at low temperatures, the residual (= measured &#8211; interpolated) is always smaller than the total uncertainty.</p>
<p>SPACING BETWEEN CALIBRATION POINTS</p>
<p>If one wishes to add negligible uncertainty from interpolation, how far apart may the calibration points be? Consider the following &#8220;generic&#8221; calibration results:<br />
<figure id="attachment_967" style="width: 835px;" class="wp-caption aligncenter"><a href="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/LS_general_interp.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2026/03/LS_general_interp.png" alt="Generic calibration results; correction or error y vs independent variable x." width="835" height="124" class="size-full wp-image-967" /></a><figcaption class="wp-caption-text">Generic calibration results; correction or error y vs independent variable x.</figcaption></figure><br />
If the correction or error, y, changes by at most half the expanded uncertainty between one calibration point and the next, then the total uncertainty at interpolated points is essentially the same as at the calibration points (it differs by less than 5%).</p>
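<p>The &#8220;less than 5%&#8221; figure can be checked directly from the rectangular-distribution estimate used earlier: a change of half the expanded uncertainty between points inflates the total uncertainty by only about 4%:</p>

```python
import math

# If the correction changes by Delta = U/2 between calibration points,
# the extra expanded uncertainty is U_interp = Delta/sqrt(3),
# so the total uncertainty grows by ~4 %:
U = 1.0                              # expanded calibration uncertainty
delta = U / 2.0                      # change in correction between points
U_interp = delta / math.sqrt(3.0)    # |corr_1 - corr_2| / sqrt(3)
U_tot = math.sqrt(U**2 + U_interp**2)
ratio = U_tot / U                    # ~1.041, i.e. under a 5 % increase
```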
<p>CONCLUSION</p>
<p>∙The correction of a UUT at a point intermediate between calibration points may be estimated in a simple manner, by interpolating linearly between the two adjacent calibration values. The uncertainty at this interpolated point may be grossly estimated by adding the difference between the two calibration values in quadrature to the calibration uncertainty. For UUTs with stable corrections, this adds negligibly to the uncertainty, but if the correction varies significantly between calibration points, the resultant uncertainty at the intermediate point is much larger.<br />
∙If the user specifies the calibration points, he takes responsibility for the uncertainty between points [Petersen, &#8220;Principles for Calibration Point Selection&#8221;, <em>NCSLI Measure</em>, Volume 8, No 3, 2013].<br />
∙If the user only specifies the range, the calibration laboratory should suggest calibration points based on<br />
&#8211; instrument manufacturer&#8217;s recommendation<br />
&#8211; understanding of the operating principles of the instrument<br />
&#8211; historical experience with this type or model of instrument (&#8220;type testing&#8221;)<br />
&#8211; in the absence of further information, sufficient points that the UUT correction changes by at most half the expanded uncertainty, between one calibration point and the next (unless some uncertainty is added for interpolation)<br />
&#8211; ideally, the calibration laboratory should be confident enough to include a statement in the certificate such as &#8220;Effects due to interpolation are considered negligible over the calibrated range.&#8221;<br />
∙The user should agree to the suggested calibration points during contract review.</p>
<p>(Contact the author at LMC-Solutions.co.za.)</p>
]]></content:encoded>
			<wfw:commentRss>https://metrologyrules.com/interpolating-between-discrete-calibration-points-the-effect-on-uncertainty-addendum/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Interlaboratory comparisons (Proficiency Testing) among calibration laboratories &#8211; how to choose the assigned value (Reference Value) &#8211; Addendum</title>
		<link>https://metrologyrules.com/interlaboratory-comparisons-proficiency-testing-among-calibration-laboratories-how-to-choose-the-assigned-value-reference-value-addendum/</link>
		<comments>https://metrologyrules.com/interlaboratory-comparisons-proficiency-testing-among-calibration-laboratories-how-to-choose-the-assigned-value-reference-value-addendum/#comments</comments>
		<pubDate>Mon, 15 Dec 2025 09:54:55 +0000</pubDate>
		<dc:creator><![CDATA[Hans Liedberg]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://metrologyrules.com/?p=803</guid>
		<description><![CDATA[ABSTRACT This publication follows &#8220;Interlaboratory comparisons (Proficiency Testing) among calibration laboratories &#8211; how to choose the assigned value (Reference Value)&#8221; of October 2025, describing additional techniques for visualizing and interpreting results. Participants&#8217; reported probability distributions are combined for visual review, Cox&#8217;s Largest Consistent Subset approach is applied to remove outliers, and the recommendations in this [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>ABSTRACT<br />
This publication follows &#8220;<a href="https://metrologyrules.com/interlaboratory-comparisons-proficiency-testing-among-calibration-laboratories-how-to-choose-the-assigned-value-reference-value/" title="ILC among cal labs - RV" rel="noopener" target="_blank">Interlaboratory comparisons (Proficiency Testing) among calibration laboratories &#8211; how to choose the assigned value (Reference Value)</a>&#8221; of October 2025, describing additional techniques for visualizing and interpreting results. Participants&#8217; reported probability distributions are combined for visual review, Cox&#8217;s Largest Consistent Subset approach is applied to remove outliers, and the recommendations in this and the previous paper are applied to a small Thermometry ILC with four participants.</p>
<p>VISUAL REVIEW OF DATA</p>
<p>First, we continue to discuss the infrared thermometry ILC involving Ref Lab and 13 participants, from the previous paper: The <a href="https://towardsdatascience.com/kernel-density-estimation-explained-step-by-step-7cc5b5bc4517/" title="Kernel density plots" rel="noopener" target="_blank">kernel density plots</a> that were used to examine the data for items A and D combine normal distributions with <em>equal</em> standard deviations (widths) around each participant&#8217;s result. (Plots were generated using the <a href="https://www.r-bloggers.com/2023/08/kernel-density-plots-in-r/" title="Kernel density plots in R" rel="noopener" target="_blank">density()</a> function in the R programming language.) What would the plots look like if the widths were derived from the participants&#8217; estimated uncertainties (which vary by an order of magnitude)? And, what would be the effect of including Ref Lab in the combination (<a href="https://www.countbayesie.com/blog/2022/11/30/understanding-convolutions-in-probability-a-mad-science-perspective" title="Convolution of probability distributions" rel="noopener" target="_blank">convolution</a>) of normal distributions?</p>
<p><a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/10/KDE_A.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/10/KDE_A.png" alt="KDE-A" width="1488" height="313" class="alignnone size-full wp-image-747" /></a></p>
<p><a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/Conv_and_Ref.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/Conv_and_Ref.png" alt="Convoluted_A" width="1492" height="623" class="alignnone size-full wp-image-807" /></a></p>
<p>For item A, using the participants&#8217; uncertainties (second row of graphs above) tends to &#8220;smear out&#8221; the secondary modes (lower peaks) observed in the kernel density plots, suggesting that results in these areas have larger estimated uncertainties. (The effect is similar to enlarging the bandwidth of the kernel density plot.) Including the Ref Lab results in the combined distributions (third row of graphs) adds or heightens a peak in the 23, 34 and 80 °C data. As Ref Lab has the smallest uncertainties, it produces sharp, high peaks. (This is similar to how the weighted mean is dominated by the results with the smallest uncertainties.) </p>
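<p>The combination of normal distributions described above can be sketched as follows. (This is a minimal Python illustration, not the author&#8217;s code &#8211; the original plots were produced with R&#8217;s density() function &#8211; and the results and uncertainties below are hypothetical.)</p>

```python
import numpy as np

def combined_density(x_grid, results, widths):
    """Average of normal PDFs, one per participant, each centred on that
    participant's result with the given width (standard deviation).
    Dividing by the number of participants makes the curve integrate to 1."""
    dens = np.zeros_like(x_grid, dtype=float)
    for xi, wi in zip(results, widths):
        dens += np.exp(-0.5 * ((x_grid - xi) / wi) ** 2) / (wi * np.sqrt(2 * np.pi))
    return dens / len(results)

# Hypothetical corrections (K) and standard uncertainties for five participants
results = np.array([-0.3, -0.1, 0.0, 0.1, 1.2])
u_equal = np.full_like(results, 0.3)               # equal widths: a kernel density plot
u_reported = np.array([0.1, 0.2, 0.5, 0.3, 0.05])  # widths from reported uncertainties

grid = np.linspace(-2.0, 2.5, 500)
kde_like = combined_density(grid, results, u_equal)
weighted = combined_density(grid, results, u_reported)
```

A small reported uncertainty produces a tall, narrow component (as with Ref Lab above), so `weighted` develops a sharp peak at the isolated result at +1.2 K that the equal-bandwidth plot smears out.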
<p><a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/10/KDE_D.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/10/KDE_D.png" alt="KDE_D" width="1483" height="309" class="alignnone size-full wp-image-750" /></a></p>
<p><a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/Conv_and_Ref_D.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/Conv_and_Ref_D.png" alt="Convoluted_D" width="1487" height="626" class="alignnone size-full wp-image-809" /></a></p>
<p>For item D, using the participants&#8217; uncertainties removes the secondary peak for 23 °C, splits the main peak for 34 °C into two, and reduces the secondary peak for 40 °C.<br />
For 80 °C, two peaks become three: -3 K (participants P1, P5, P6, P7), +1.5 K (P9, P11) and +3 K (P12, P13). It is interesting to observe that, if their uncertainties are small, just two participants (out of p = 10) can create a significant peak. Ref Lab&#8217;s result at 80 °C lies around halfway between the -3 K and +1.5 K peaks: does this suggest a difference in method between P1-P5-P6-P7 and P9-P11, with Ref Lab applying a measurement technique intermediate between these two populations? A similar split (P1-P2-P3-P5-P6-P7 vs P9-P10-P11) may be present at 40 °C, but the differences are smaller and therefore less distinct.<br />
(It should be noted that Ref Lab and participants P1 to P8 used blackbody targets with emissivity ≈ 1.00, while P9 to P13 used flat plates with ε ≈ 0.95. The latter results are corrected to ε ≈ 1.00 (corrections ~ 0.7 K around 37 °C and 2.5 K at 80 °C), but perhaps some emissivity-related effects remain.) </p>
<p>A general observation, regarding the convolution of reported results (with widely differing uncertainties) vs the use of a kernel density plot (with equal bandwidth for all results): In practice, calibration laboratories using similar measurement standards and equipment (and therefore having similar measurement capabilities in reality) may report very different uncertainties, because, for example, some accredited CMCs are conservative and others are not. For this reason, it is suggested that reported uncertainties are of little value in visualizing the distribution of results, and a kernel density plot (with equal bandwidth for all results) is recommended.</p>
<p>LARGEST CONSISTENT SUBSET OF RESULTS</p>
<p>The above distributions of results are mostly multi-modal (having several local maxima) and asymmetric. Would the removal of &#8220;inconsistent&#8221; results improve the situation? (Our ideal combined distribution would be symmetric, with only one peak.) [<a href="https://www.researchgate.net/publication/228967818_The_evaluation_of_key_comparison_data_Determining_the_largest_consistent_subset" title="Cox - LCS, 2005" rel="noopener" target="_blank">Cox, &#8220;The evaluation of key comparison data: determining the largest consistent subset&#8221;, 2005</a>] proposes a chi-squared test for consistency between the Weighted Mean (assigned value, y) and the participants&#8217; results, x_i, with the &#8220;worst&#8221; results (largest contributors to chi-squared) being removed one-by-one, until the observed chi-squared value &#8220;passes&#8221; and we are left with the &#8220;Largest Consistent Subset&#8221; (or, LCS):<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/Cox_chi-squared_test.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/Cox_chi-squared_test-300x93.png" alt="Cox_chi-squared_test" width="437" height="136" class="aligncenter size-medium wp-image-835" /></a><br />
(The threshold value of Χ^2 may be calculated using the spreadsheet function CHISQ.INV.RT(0.05,N-1), where N = number of participants contributing to RV, or the Largest Consistent Subset may be directly determined using the <a href="https://cran.r-project.org/web/packages/metRology/metRology.pdf" title="metRology package in R" rel="noopener" target="_blank">LCS() function in the metRology library</a> of the R programming language.)<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/Conv_and_Ref_A_LCS_.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/Conv_and_Ref_A_LCS_.png" alt="Conv_and_Ref_A_LCS_" width="1460" height="586" class="aligncenter size-full wp-image-846" /></a><br />
For item A, only the 80 °C results are inconsistent (according to Cox&#8217;s criterion), and the removal of inconsistent results to find the &#8220;Largest Consistent Subset&#8221; (second row of graphs above) does not, unfortunately, create a unimodal or symmetric plot.<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/Conv_and_Ref_D_LCS.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/Conv_and_Ref_D_LCS.png" alt="Conv_and_Ref_D_LCS" width="1489" height="597" class="aligncenter size-full wp-image-848" /></a><br />
For item D, all but the 23 °C results are inconsistent. Removing inconsistent results does improve the appearance of the combined probability distributions, though some asymmetry remains.</p>
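<p>Cox&#8217;s procedure can be sketched as follows. (This is a Python illustration of the chi-squared consistency test and one-by-one removal, with hypothetical data; in practice the threshold may be obtained from CHISQ.INV.RT in a spreadsheet, or the subset found directly with LCS() in R&#8217;s metRology package. SciPy supplies the chi-squared quantile here.)</p>

```python
import numpy as np
from scipy.stats import chi2

def largest_consistent_subset(x, u, alpha=0.05):
    """Cox's LCS: iteratively drop the largest contributor to chi-squared
    until the weighted mean is consistent with the remaining results.
    Returns (indices kept, reference value, its standard uncertainty)."""
    idx = list(range(len(x)))
    while len(idx) > 1:
        xs, us = np.asarray(x)[idx], np.asarray(u)[idx]
        w = 1.0 / us**2
        y = np.sum(w * xs) / np.sum(w)            # weighted mean (candidate RV)
        contrib = ((xs - y) / us) ** 2            # each lab's chi-squared term
        if contrib.sum() <= chi2.ppf(1 - alpha, len(idx) - 1):
            return idx, y, 1.0 / np.sqrt(np.sum(w))
        idx.pop(int(np.argmax(contrib)))          # remove the "worst" result
    return idx, x[idx[0]], u[idx[0]]

# Hypothetical corrections (K) and standard uncertainties; the fourth
# result is a deliberate outlier
x = [0.10, 0.15, 0.05, 1.20, 0.12]
u = [0.10, 0.12, 0.08, 0.10, 0.15]
keep, rv, u_rv = largest_consistent_subset(x, u)
```

With these numbers the outlier is removed on the first pass, and the remaining four results pass the chi-squared test.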
<p>The LCS approach to finding an assigned value (or Key Comparison Reference Value, KCRV, for a Key Comparison between National Metrology Institutes) is intended to produce the best estimate of the SI value of the measurand [Cox, &#8220;The evaluation of key comparison data: determining the largest consistent subset&#8221;, Metrologia 44 (2007) 187–200]. This is seldom the goal for an interlaboratory comparison (ILC) between commercial calibration laboratories, where each participant simply aims to demonstrate his equivalence to a &#8220;credible&#8221; Reference Value. Bearing this in mind, and considering that the number of participants in a typical calibration ILC is small (7 or less, according to [<a href="https://european-accreditation.org/wp-content/uploads/2018/10/ea-4-21-inf-rev00-march-18.pdf" title="EA-4/21" rel="noopener" target="_blank">EA-4/21 INF: 2018, “Guidelines for the assessment of the appropriateness of small interlaboratory comparison within the process of laboratory accreditation”</a>]), the removal of participants from RV using a chi-squared test is not always reasonable: the reported uncertainties used to calculate chi-squared may be unrealistic, and the resultant number of results contributing to RV may be too small for statistical confidence. Instead, it is suggested that commercial calibration laboratories arranging a small ILC use a &#8220;consensus value from participant results&#8221; [ISO 13528:2015 section 7.7], using &#8220;a subset of participants determined to be reliable, by some pre-defined criteria &#8230; on the basis of prior performance&#8221; [13528 clause 7.7.1.1], with this &#8220;prior performance&#8221; being their reputation in the relevant field. An example of such a small ILC is presented below.</p>
<p>EXAMPLE: A SMALL INFRARED THERMOMETRY ILC</p>
<p>In this example, one infrared thermometer was calibrated from -20 to 150 °C. The pilot laboratory invited the other three laboratories to participate based on their reputation for competence in the field, so that all four laboratories might contribute to RV. Viewing conditions were specified (100 mm from a 50 mm diameter target, or 300 mm from a 150 mm target, etc), so that differing Size-of-Source Effect would not render the results incomparable. Three laboratories submitted multiple result sets (measured by different metrologists), but selected one set to contribute to RV, as required by the protocol. The results are presented below, in the sequence in which they were measured, with secondary results from relevant laboratories being identified as &#8220;x.2&#8221;:<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/IR-2025-05_results.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/IR-2025-05_results.png" alt="IR-2025-05_results" width="403" height="452" class="aligncenter size-full wp-image-866" /></a></p>
<p>VISUAL REVIEW OF DATA<br />
The results are plotted, relative to the mean correction, below:<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/IR-2025-05_symmetry_about_mean_incl_participant_2.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/IR-2025-05_symmetry_about_mean_incl_participant_2-1024x487.png" alt="IR-2025-05_symmetry_about_mean_incl_participant_2" width="474" height="225" class="aligncenter size-large wp-image-868" /></a><br />
The expected tight grouping of results within a laboratory may be clearly seen in laboratories A, C and D. Good agreement between initial and final measurements at laboratory A (results A.1 and A.2) indicates that the thermometer was stable during the one-month circulation.<br />
Kernel density plots of the four &#8220;primary&#8221; results (chosen for inclusion in RV) are shown below:<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/KDP_IR-2025-05.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/KDP_IR-2025-05.png" alt="KDP_IR-2025-05" width="1432" height="294" class="aligncenter size-full wp-image-875" /></a><br />
It is observed that these kernel density plots allow for better visual review of the data than does the graph of results relative to the mean. (There is significant asymmetry in the 80 °C and 120 °C results, which was not obvious in the graph relative to the mean.)</p>
<p>a) U(RV) FROM PARTICIPANT UNCERTAINTIES<br />
Two Reference Values are considered, the mean and the weighted mean. For each laboratory, RV is calculated from the results of the other three, to avoid the laboratory being &#8220;compared to itself&#8221;. The uncertainty of the mean is estimated from reported participant uncertainties by <img src='http://s.wordpress.com/latex.php?latex=%5Cdisplaystyle%201%2Fn%20%5Ccdot%20%5Csqrt%7Bu_1%20%5E2%20%2B%20%5Cldots%20%2B%20u_n%20%5E2%7D&bg=ffffff&fg=000000&s=0' alt='Latex formula' title='Latex formula' class='latex' />, similar to the formula for the <a href="https://sisu.ut.ee/measurement/33-standard-deviation-mean/" title="ESDM" rel="noopener" target="_blank">standard deviation of the mean</a>. The variance of the weighted mean is found from reported participant uncertainties by <img src='http://s.wordpress.com/latex.php?latex=%5Cdisplaystyle%20%5Csigma%20%5E2%20%3D%20%5Cfrac%7B1%7D%7B%5Csum%7B%5Cfrac%7B1%7D%7B%5Csigma_i%20%5E2%7D%7D%7D&bg=ffffff&fg=000000&s=0' alt='Latex formula' title='Latex formula' class='latex' />.<br />
As seen below, either value of U(RV) is smaller than U(LV) for all but participant B.1 at the lower temperatures, so it tests the capabilities of laboratories A, C and D fairly rigorously. It does not, however, meet the criterion for the uncertainty of the assigned value, u(x_pt), relative to the &#8220;performance evaluation criterion&#8221; σ_pt, suggested in ISO 13528 clause 9.2.1, namely<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/sigma_pt_vs_U_pt.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/sigma_pt_vs_U_pt.png" alt="sigma_pt_vs_U_pt" width="147" height="30" class="aligncenter size-full wp-image-904" /></a><br />
(If this criterion is met, then the uncertainty of the assigned value may be considered to be negligible.)<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/IR-2025-05_RV1.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/IR-2025-05_RV1.png" alt="IR-2025-05_RV" width="1420" height="350" class="aligncenter size-full wp-image-891" /></a><br />
As mentioned above, the uncertainties reported by commercial calibration laboratories in ILCs differ widely, often without any technical justification. For this reason, it is recommended to use the unweighted mean as the Reference Value. However, as the goal of the ILC is to demonstrate equivalence between participants <em>within their reported uncertainties</em>, U(RV) is estimated from these reported uncertainties, not from the observed spread of results.</p>
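<p>The two U(RV) formulas above can be sketched as follows. (A Python illustration; the three standard uncertainties and the σ_pt value are hypothetical.)</p>

```python
import math

def u_mean(u):
    """Standard uncertainty of the unweighted mean from reported
    standard uncertainties: (1/n) * sqrt(u1^2 + ... + un^2)."""
    return math.sqrt(sum(ui**2 for ui in u)) / len(u)

def u_weighted_mean(u):
    """Standard uncertainty of the weighted mean: 1 / sqrt(sum(1/ui^2))."""
    return 1.0 / math.sqrt(sum(1.0 / ui**2 for ui in u))

# Hypothetical standard uncertainties (K) of the three "other" participants
u_others = [0.15, 0.25, 0.40]
u_rv_mean = u_mean(u_others)
u_rv_wmean = u_weighted_mean(u_others)

# ISO 13528 clause 9.2.1: u(x_pt) is negligible if u(x_pt) <= 0.3 * sigma_pt
sigma_pt = 1.0  # hypothetical performance evaluation criterion, K
negligible = u_rv_wmean <= 0.3 * sigma_pt
```

Note that the weighted-mean uncertainty is always smaller than the smallest reported uncertainty, which is why a Reference Value dominated by one optimistic participant can appear misleadingly precise.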
<p>b) U(RV) FROM SPREAD OF PARTICIPANT RESULTS<br />
The median is often used as a robust Reference Value for large ILCs, with its uncertainty being estimated from the spread of results via the scaled median absolute deviation (MADe). Is the median an appropriate Reference Value for a small ILC?<br />
<a href="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/IR-2025-05_median1.png"><img src="http://metrologyrules.com/wp1/wp-content/uploads/2025/11/IR-2025-05_median1.png" alt="IR-2025-05_median" width="711" height="365" class="aligncenter size-full wp-image-912" /></a><br />
(Note: In the above table, the median and its uncertainty are calculated using all four results, i.e., not excluding the participant&#8217;s own result.)<br />
It is observed that, while the median may be a reasonable Reference Value for such an ILC, the uncertainty of the median, being derived from the spread of the participants&#8217; results, is not suited to the task of demonstrating equivalence <em>within reported uncertainties</em>. To achieve this goal, U(RV) should be derived from the reported uncertainties. Also, U(median) is often larger than U(LV), and so does not test participants&#8217; capabilities rigorously.</p>
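<p>The median and its MADe-based uncertainty can be sketched as follows. (A Python illustration with hypothetical data; the 1.25/√p factor follows the robust approach of ISO 13528 for the standard uncertainty of a robust assigned value, with MADe standing in for the robust standard deviation.)</p>

```python
import math
import statistics

def mad_e(x):
    """Scaled median absolute deviation (MADe), a robust estimate of
    standard deviation: 1.483 * median(|x_i - median(x)|)."""
    m = statistics.median(x)
    return 1.483 * statistics.median(abs(xi - m) for xi in x)

def u_median(x):
    """Standard uncertainty of the median from the spread of results,
    following ISO 13528: u(x_pt) = 1.25 * s / sqrt(p), with s = MADe."""
    return 1.25 * mad_e(x) / math.sqrt(len(x))

# Hypothetical corrections (K) from four participants at one calibration point
x = [0.05, 0.10, 0.12, 0.60]
rv = statistics.median(x)
u_rv = u_median(x)
```

Because MADe responds to the middle of the data, the outlying 0.60 K result barely moves the median or its uncertainty, which is what makes the median robust for large ILCs; but, as noted above, a spread-based U(RV) does not serve the goal of demonstrating equivalence within reported uncertainties.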
<p>CONCLUSION</p>
<p>∙Kernel density plots are recommended for visual review of data (the first step in analysis of ILC results).<br />
∙The ILC report should present participants&#8217; results in the order in which they were measured, so that readers may themselves look for artefact drift.<br />
∙If a laboratory submits multiple result sets, it should choose one set to contribute to a consensus Reference Value, before having sight of other participants&#8217; results.<br />
∙The ILC report should indicate which result sets are from the same laboratory, so that readers may interpret data clustering correctly.<br />
∙For small ILCs, each participant should be evaluated against a Reference Value derived from <em>other participants&#8217; results</em> (to avoid bias).<br />
∙To promote the credibility of RV, a subset of participants may be pre-selected to contribute to it, on the basis of &#8220;prior performance&#8221; (reputation in the field).<br />
∙If a consensus Reference Value is to be used, the mean is recommended, rather than the weighted mean, as reported uncertainties often vary widely, without technical justification.<br />
∙The uncertainty of a consensus Reference Value, U(RV), should be determined from participants&#8217; reported uncertainties, rather than from the spread of results, to achieve the goal of demonstrating equivalence <em>within reported uncertainties</em>.<br />
∙Technical assessors should ask: Is RV credible? (Is it derived from reputable laboratories&#8217; results?) Does it omit the laboratory being tested, especially if the number of participants is small? Is U(RV) &lt; U(LV), so that the participant&#8217;s capability is tested rigorously?

Contact the author at LMC-Solutions.co.za.

</p>
]]></content:encoded>
			<wfw:commentRss>https://metrologyrules.com/interlaboratory-comparisons-proficiency-testing-among-calibration-laboratories-how-to-choose-the-assigned-value-reference-value-addendum/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
