Control of measuring equipment – the requirements of ISO 9001

Once a measuring instrument is purchased and successfully installed in an ISO 9001 certified organization, the responsible staff member might consider his job done. However, to satisfy ISO 9001’s requirements for control, measuring equipment used for product and process verification requires attention throughout its lifespan. This article discusses what needs to be done, and how to do it thoroughly but cost-effectively. (Note: Clause numbers below are from ISO 9001:2008.)

In section 7 (Product realization) of ISO 9001, clause 7.6 deals with “Control of monitoring and measuring equipment”. Firstly, it states the purpose of monitoring and measuring, namely, “to provide evidence of conformity of product to determined requirements”.

“Where necessary to ensure valid results, measuring equipment shall
a) be calibrated or verified, or both, at specified intervals, or prior to use, against measurement standards traceable to … national measurement standards… the organization shall assess and record the validity of the previous measuring results when the equipment is found not to conform to requirements… Records of the results of calibration and verification shall be maintained.”

Basically, the equipment user must manage his risk of making incorrect measurements (“control”). As zero risk is impossible (risk management aims at “tolerable” risk, not zero risk), he must also be able to assess the impact when measuring equipment is found to have drifted outside required limits. As in all quality management systems (QMS), record-keeping is essential.

“Verification” means “provision of objective evidence that a given item fulfils specified requirements” [International vocabulary of metrology (VIM), clause 2.44]. In this case, the item is a measuring system that is required to be sufficiently accurate. (To be more precise, the measurement results must be traceable to a national measurement standard, with a small enough measurement uncertainty.)

The equipment user must decide the interval between calibrations or verifications, taking into account regulations, conditions of use, advice from measurement experts, etc.

A calibration or verification is “traceable” [VIM, 2.41] when it is
(i) performed by competent personnel,
(ii) according to a documented & validated procedure (work instruction, SOP),
(iii) using traceable measuring equipment,
(iv) with a proper estimation of measurement uncertainty.

(i) How are personnel proven to be competent? By suitable training records.
(ii) How is a calibration procedure validated? By data, showing the capability of the personnel, procedure and equipment to achieve the claimed measurement uncertainty. Typically, a proficiency test (“PT”) or interlaboratory comparison is carried out.
(iii) How is the measuring equipment used to perform calibration (“measurement standards”) made traceable? It is itself calibrated, by
1) a National Metrology Institute (NMI) with a suitable Calibration and Measurement Capability (CMC) for this parameter published in the BIPM Key Comparison Database (KCDB), or,
2) a calibration laboratory accredited to ISO 17025 by an accreditation body that is a signatory to the ILAC Mutual Recognition Arrangement [ILAC P10:01/2013, sections 2 & 3].
(iv) How is the uncertainty of measurement estimated? The calibration procedure usually lists the main components of uncertainty and describes the manner in which they may be estimated. These components are combined using an internationally agreed method [EA-4/02 Evaluation of the Uncertainty of Measurement in Calibration].
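
To make the combination step concrete, here is a minimal sketch (in Python) of the usual root-sum-of-squares rule with an expanded uncertainty at coverage factor k = 2. The component names and values are illustrative assumptions, not taken from EA-4/02 or from any particular procedure.

    import math

    # Illustrative (assumed) uncertainty components for a calibration, each
    # already expressed as a standard uncertainty in the same unit (here °C).
    components = {
        "reference standard (certificate U divided by k)": 0.05,
        "drift of reference since its last calibration":   0.10,
        "resolution of the unit under test":               0.03,
        "repeatability of readings":                       0.04,
    }

    # Uncorrelated components are combined as the root sum of squares ...
    u_combined = math.sqrt(sum(u ** 2 for u in components.values()))

    # ... and reported as an expanded uncertainty with coverage factor k = 2
    # (approximately 95 % coverage).
    k = 2
    U_expanded = k * u_combined

    print(f"Combined standard uncertainty: {u_combined:.3f} °C")
    print(f"Expanded uncertainty (k = 2):  {U_expanded:.3f} °C")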

Note the distinction between
a) measuring equipment used for product and process verification, and
b) measuring equipment used to perform calibration (“measurement standards”).
Item b) might be, for example, a “master gauge” used to check (calibrate or verify) other gauges in the factory: This measurement standard’s traceability should satisfy ILAC requirements, including accreditation of the calibrating laboratory to ISO 17025.
Item a) is all the other gauges used in the factory: These items may be calibrated “in-house” (even if the organization is not accredited as a calibration laboratory), but the process must still satisfy requirements (i) to (iv) [SANAS TR 25, section 3.3].

How to satisfy ISO 9001’s calibration/verification requirement in a cost-effective manner:

1. If possible, consider this “continuing cost of ownership” aspect when selecting which equipment to purchase. (This applies not only to the measuring equipment itself, but also to the systems in which it is installed.) For example, sensors with “standard” dimensions and fittings will probably fit more easily into calibration apparatus. An additional access port, permitting connection of a measurement standard next to the working measuring equipment, may allow in situ calibration, saving downtime and effort.

2. Choose measuring equipment with appropriate drift specifications: better specs than necessary will mean unjustified extra expense, while equipment with poor specs will require more frequent calibration/verification, with attendant wastage of time and money. This highlights the value of understanding how the various uncertainties in system behaviour affect the time or cost of the process and the quality of the final product. For example, it is probably not useful to install a temperature sensor accurate to 0.1 °C if the temperature controller has a cycle of ±5 °C; a sensor accurate to 1 °C or 1.5 °C would probably do (a rough numerical comparison is sketched after this list).

3. Consider cost and man-hours when choosing between external (accredited) and in-house calibration. Remember that in-house calibration requires personnel to be trained, a procedure (including estimation of measurement uncertainty) to be documented, measurement standards to be available and records to be kept. Most organizations adopt a “hybrid” approach: Master gauges, and equipment requiring complex calibration procedures, are calibrated by an external, ISO 17025-accredited, calibration laboratory. Low-accuracy, simple (“factory floor”) measuring equipment is calibrated in-house.
This approach is practical, as the organization’s own personnel need not undergo high-level metrology training, and it is cost-effective, as expensive, high-accuracy measurement standards need not be maintained in-house, while the bulk of measuring equipment need not be submitted to costly external calibration.
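
As a rough illustration of point 2 above, the sketch below compares the two candidate sensors against the ±5 °C controller cycle. Treating the two effects as independent and combining them in quadrature is a simplifying assumption; the figures are those mentioned in point 2.

    import math

    # Figures from point 2: a controller cycling within ±5 °C, and two
    # candidate sensors (0.1 °C and 1.0 °C accuracy).
    control_cycle = 5.0  # °C, half-width of the control band
    sensors = {"0.1 °C sensor": 0.1, "1.0 °C sensor": 1.0}

    for name, accuracy in sensors.items():
        # Simplifying assumption: treat the two effects as independent and
        # combine them in quadrature.
        overall = math.sqrt(control_cycle ** 2 + accuracy ** 2)
        print(f"{name}: overall variation about ±{overall:.2f} °C")

    # Both come out at roughly ±5 °C: the controller cycle dominates, so the
    # more accurate (and more expensive) sensor buys essentially nothing here.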

An example – Calibration of Resistance Temperature Detectors (RTDs):

The organization chooses to purchase one master RTD with digital display (“master thermometer”). They have it calibrated by an ISO 17025-accredited calibration laboratory, over their operating temperature range (-40 °C to 200 °C), with a calibration uncertainty of 0.1 °C. They decide to send it out for recalibration annually, as the manufacturer’s one-year drift specification for the display is 0.2 °C, and they want to use it to calibrate their working RTDs to an uncertainty of ≤ 0.3 °C.
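
A rough check that the ≤ 0.3 °C target is achievable might look like the sketch below. The treatment of each component (k = 2 on the certificate value, rectangular limits for the drift specification) and the two smaller contributions are assumptions made for illustration, not the organization’s actual uncertainty budget.

    import math

    # Assumed standard uncertainty contributions, in °C:
    u = {
        # Master thermometer certificate: U = 0.1 °C quoted at k = 2
        "master calibration": 0.1 / 2,
        # One-year drift specification of 0.2 °C treated as rectangular limits
        "master drift": 0.2 / math.sqrt(3),
        # Assumed contributions for ice-point realization and for resolution /
        # repeatability of the working RTD under calibration
        "ice-point realization": 0.01,
        "working RTD readings": 0.05,
    }

    u_c = math.sqrt(sum(v ** 2 for v in u.values()))  # combined standard uncertainty
    U = 2 * u_c                                       # expanded uncertainty, k = 2
    print(f"Expanded uncertainty about {U:.2f} °C (target: 0.3 °C or better)")
    # Roughly 0.27 °C with these assumed figures, so the target appears feasible.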

They also procure a one litre, wide-mouth vacuum flask and an ice crusher, so that they are able to prepare an ice point from available ice cubes and tap water. After experimenting on how best to achieve this with their equipment, they prepare a procedure describing preparation of the ice point and measurement of an RTD therein. The procedure highlights sources of error which may significantly affect the result and explains how to prevent or detect such errors. It also explains how to estimate the uncertainty of calibration of the RTD.

Having received their master thermometer with its calibration certificate, they measure it at the ice point according to their procedure (and estimate the uncertainty of measurement). They then prepare a report comparing their result to that of the external, accredited calibration laboratory at 0 °C: this constitutes an interlaboratory comparison (ILC), demonstrating the measurement capability of their personnel, procedure and equipment when calibrating an RTD in an ice point. The senior staff member files this ILC report in his training records, as evidence of his competence. Junior staff members then compare their own measurement results to those of the senior staff member (intra-laboratory comparison), with the criterion for acceptance being that the results agree within the combined uncertainties. Based on this documented evidence, the organization authorises the relevant staff members to perform in-house calibration of RTDs.
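
One common way to formalize “agree within the combined uncertainties” is the normalized error (En) used in proficiency testing: the difference between the two results divided by the root sum of squares of their expanded uncertainties, with |En| ≤ 1 taken as agreement. The sketch below uses made-up ice-point results purely for illustration.

    import math

    def normalized_error(x1, U1, x2, U2):
        """E_n = |x1 - x2| / sqrt(U1^2 + U2^2); |E_n| <= 1 indicates the two
        results agree within their combined expanded uncertainties."""
        return abs(x1 - x2) / math.sqrt(U1 ** 2 + U2 ** 2)

    # Made-up example: external accredited laboratory vs. in-house result for
    # the master thermometer at the ice point (values in °C, U quoted at k = 2).
    e_n = normalized_error(x1=0.02, U1=0.10,   # external laboratory
                           x2=0.08, U2=0.15)   # in-house measurement
    print(f"E_n = {e_n:.2f} ->", "agreement" if e_n <= 1.0 else "no agreement")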

The junior staff members then perform in-house calibration of the organization’s working RTDs at 0 °C every three months (the interval being chosen to adequately manage the risk of measurement error) and record the results in a controlled document, with control limits of ±0.3 °C being imposed. (In other words, a working RTD passes if it reads between -0.3 °C and +0.3 °C at the ice point.) The working RTDs are of tolerance class C, that is, at the time of manufacture they complied with the temperature/resistance tables published in IEC 60751 within ±(0.6 + 0.01∙|t|) °C, over the range (-50 to 600) °C. As RTDs falling within these limits are acceptable to the organization, the single-point calibration/verification demonstrating compliance at 0 °C (difference < 0.3 °C, with expanded uncertainty of 0.3 °C) is judged to be sufficient to control temperature measurement accuracy over the full range of operation. (The master thermometer is calibrated over this full range, in case any working RTD exhibits suspicious results and it is desired to perform further evaluation at other temperatures.)
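
The class C tolerance and the single-point control limit lend themselves to a small sketch. The tolerance formula and the ±0.3 °C limit are those given above, while the sample reading is made up.

    def class_c_tolerance(t):
        """IEC 60751 tolerance class C: ±(0.6 + 0.01 * |t|) °C over -50 °C to 600 °C."""
        return 0.6 + 0.01 * abs(t)

    def passes_ice_point_check(reading, limit=0.3):
        """Quarterly single-point verification: pass if the working RTD reads
        within ±limit °C at the ice point (0 °C)."""
        return abs(reading) <= limit

    for t in (0, 100, 200):
        print(f"Class C tolerance at {t} °C: ±{class_c_tolerance(t):.2f} °C")

    print("Reading of +0.18 °C at the ice point passes:", passes_ice_point_check(0.18))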

(Contact the author at lmc-solutions.co.za.)