The verification process is an important precondition for ensuring reliable, high-quality laboratory test results and, ultimately, for increasing patient safety. The term verification encompasses the provision of objective evidence that a measurement procedure/measuring system meets the manufacturer's performance requirements (1). As an essential requirement, it is embedded in the International Standard ISO 15189, intended specifically to guide the management of quality systems in medical laboratories (2, 3).
According to the IVD Medical Device Directive 98/79/EC, manufacturers of in vitro medical devices should perform the validation of measurement methods/systems marketed in Europe (4). In order to ensure patient safety and compliance with ISO 15189 accreditation requirements, medical laboratories have the responsibility to independently confirm the performance properties stated by the manufacturer before implementing an examination procedure into routine practice (5). Verification of performance properties comprises the appropriate design and execution of a series of experiments confirming that the measurement procedure or measuring system meets the manufacturer's claims and is fit for the intended use. However, the optimal extent of the verification procedure (i.e. a detailed description of the procedure with regard to different measurement methods/measuring systems) applicable to all laboratories, and the mathematical basis of the calculations used, are not specified in the ISO 15189 standard (3, 6). The absence of a comprehensive guideline that could assist medical laboratories in designing and performing each step of the verification process results in widely differing local policies for verification purposes. Since locally tailored procedures rely mainly on individual interpretation of the available documents describing the verification workflow, they are often inconsistent and poorly balanced in terms of the heterogeneity of method types, local technical capabilities, increased workload, costs and risks.
Harmonization of verification procedures might be accomplished by compiling national documents that take into account the specific aspects and demands of local laboratories while balancing them against the mandatory requirements of the ISO 15189 standard (7). Considering that these issues have not been adequately addressed in Croatia, the Croatian Society of Medical Biochemistry and Laboratory Medicine (CSMBLM) and the Croatian Chamber of Medical Biochemists (CCMB) formed a Working group for method verification and validation (WG VV). The specific goals of the WG VV can be summarized as follows: 1) to identify current policies and practices used in performing verification experiments among medical biochemistry laboratories (MBLs) in Croatia, 2) to identify the main problems related to the performance of verification, and 3) to propose national recommendations that would facilitate and guide Croatian MBLs in performing method verification. The ultimate goal is to harmonize verification procedures at the national level while taking into account the difficulties encountered in small MBLs as well as the capacities of large MBLs. Accordingly, we hypothesized that verification procedures performed in Croatian MBLs differ widely from laboratory to laboratory. Thus, the aim of this survey was to accomplish the first goal of the WG VV, i.e. to gather specific data on the steps used in the verification workflow and the problems occurring while performing method verification in Croatian MBLs.
Materials and methods
In order to obtain data on the verification procedures used in Croatia, two rounds of the survey were conducted using the online survey platform SurveyMonkey (SurveyMonkey Inc., Palo Alto, USA). First, a pilot survey was sent to 15 randomly chosen managers of MBLs belonging to different health care settings. The questionnaire contained 16 questions with predefined answers and allowed free-text comments. This pilot survey aimed to assess the appropriateness of the questions asked. Based on these first results, the members of the WG VV prepared a second questionnaire, containing 29 questions/statements, intended for the second round of the survey. The second survey was disseminated among the managers of all MBLs in Croatia (N = 196) during February 2019. The managers were asked to fill in or select one or more of the proposed responses.
The second survey was divided into two sections. The first section included questions/statements on the type of laboratory (i.e. health care setting), accreditation status and implementation of verification procedures. The second section included detailed questions/statements related to the specific steps of the verification workflow performed in each individual MBL that had implemented verification policies in routine practice.
Results
A total of 108 laboratories participated in the survey, which represents 55% of MBLs in Croatia. The general characteristics of the participating MBLs are presented in Table 1. Most of the surveyed MBLs (81%) are supervised by the CCMB, which mandates the fulfilment of a series of standards adopted from ISO 15189. However, to date, only 10 (9%) of the surveyed MBLs are accredited according to the ISO 15189 standard. In the last ten years, the majority of the participating MBLs declared having implemented at least one of the following into routine practice: 1) a new analytical system (104/108, 96%); 2) a new method for an existing examination procedure (66/108, 61%); 3) a newly implemented examination procedure (76/108, 70%); 4) a new generation assay (63/108, 58%); 5) an assay from a different manufacturer (60/108, 56%). Only one MBL declared none of the above. Fourteen MBLs (13%) stated not to have implemented verification procedures in their routine practice and were therefore not surveyed in the second section. The main reasons for not implementing verification procedures, as stated by the participating MBLs, were organizational (lack of staff or time), the absence of a document harmonizing verification procedures, lack of economic resources and lack of availability of the necessary data from the manufacturer.
More than one third (35/94) of the MBLs that declared implementing verification procedures in the first part of the survey were excluded from the second part because they left some or all of the questions unanswered. The results obtained from the remaining laboratories participating in the second part of the survey, related to minimal verification requirements, are summarized in Table 2.
One third of the responding MBLs (18/59, 0.31) reported difficulties in defining acceptance criteria for assessing verification results, despite the availability of a defined hierarchy of criteria (8). The most prominent problems linked to this crucial step of the verification process, as stated by the responding MBLs, were: a) the availability of multiple sources of acceptance criteria, which may be confusing; b) analytes lacking available biological criteria; and c) criteria that sometimes cannot be achieved, in which case minimal criteria should be defined.
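As an illustration of how acceptance criteria can be derived from biological variation data (one of the sources in the hierarchy mentioned above), the following sketch computes the widely used desirable specifications for imprecision, bias and total allowable error. The function name and the glucose variation values are illustrative only and are not taken from the survey.

```python
import math

def desirable_specs(cv_i: float, cv_g: float) -> dict:
    """Desirable analytical performance specifications derived from
    biological variation data (classical biological-variation model).
    cv_i: within-subject biological variation (%)
    cv_g: between-subject biological variation (%)
    """
    cv_a = 0.5 * cv_i                            # allowable imprecision (CV)
    bias = 0.25 * math.sqrt(cv_i ** 2 + cv_g ** 2)  # allowable bias
    tea = 1.65 * cv_a + bias                     # allowable total error
    return {"CV": round(cv_a, 2), "bias": round(bias, 2), "TEa": round(tea, 2)}

# Illustrative example with approximate biological variation data for glucose
print(desirable_specs(cv_i=5.6, cv_g=7.5))
```

The same calculation can be repeated with minimal (0.75/0.375) or optimal (0.25/0.125) multipliers when the desirable criteria cannot be achieved, which corresponds to the "minimal criteria" mentioned by the respondents.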
For precision estimation, 27/59 (0.46) of MBLs used the Clinical and Laboratory Standards Institute (CLSI) EP15-A2 protocol, i.e. triplicate measurements of two concentrations daily for five days (9). Five MBLs performed triplicate measurements of three concentrations daily for five days, and four MBLs a single measurement of two concentrations daily for five days. Two MBLs performed 10 serial measurements of three concentration levels for five days, while the remaining 21 MBLs declared to use 20 different combinations of replicates, concentration levels and days in their precision experimental designs.
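Regardless of the exact replicates-per-day design chosen, repeatability and within-laboratory imprecision are typically derived from a one-way analysis of variance with "day" as the grouping factor, in the spirit of the CLSI EP15 protocol. The following is a minimal sketch for one concentration level of a balanced design; the function name is our own and the example data are invented.

```python
import statistics as st

def precision_estimates(runs: list[list[float]]) -> tuple[float, float]:
    """Repeatability SD and within-laboratory SD from a balanced
    replicates-per-day precision experiment (one concentration level).
    runs: one inner list of replicate results per day."""
    d = len(runs)                          # number of days
    k = len(runs[0])                       # replicates per day
    day_means = [st.mean(r) for r in runs]
    grand = st.mean(day_means)             # grand mean (balanced design)
    # Within-day (repeatability) mean square
    ms_within = sum((x - m) ** 2
                    for r, m in zip(runs, day_means)
                    for x in r) / (d * (k - 1))
    # Between-day mean square
    ms_between = k * sum((m - grand) ** 2 for m in day_means) / (d - 1)
    s_repeat = ms_within ** 0.5
    # Between-day variance component (truncated at zero)
    var_between = max(0.0, (ms_between - ms_within) / k)
    s_within_lab = (ms_within + var_between) ** 0.5
    return s_repeat, s_within_lab
```

Dividing each SD by the grand mean gives the corresponding CV, which can then be compared against the manufacturer's claim or the acceptance criteria chosen above.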
Two questions were related to the specifics of the method comparison protocols used in the surveyed MBLs. A median of 20 samples (95% confidence interval (CI): 1-50) is used for method/analyser comparisons. The declared average time needed to complete a comparison study was 5 days (95% CI: 1-30). The largest group of responding MBLs (18/54, 0.33) use 20 samples in comparison experiments and need 1-30 days for sample/data collection. Fifteen MBLs declared to use 40 or more samples in comparison studies lasting 5-30 days. Eight MBLs use 30 samples over 5-20 days, and 6 MBLs use 10-15 samples over 3-20 days. Seven MBLs stated that they use 2 or fewer samples for the method comparison protocol. As for the suitability of samples for comparison studies, 37/54 (0.69) MBLs can ensure samples with results covering the entire measuring range, while 13/54 (0.24) MBLs declared to collaborate with other MBLs in order to obtain samples with concentrations covering the measuring range stated by the manufacturer. Half (27/54, 0.50) of the responding MBLs declared to implement dilution experiments (samples with high analyte concentrations, near the upper limit of the measuring range) in order to obtain comparison results.
Table 3 presents the results of the MBLs responding to the final part of the survey, related to the verification of performance characteristics that might be included in a more extended (rigorous) approach. Out of the 35 MBLs not verifying the linearity of the measuring range, ten later stated that they do perform linearity verification, but not for the entire range declared by the manufacturer. The most prominent problems encountered when performing a verification process, according to the participating MBLs, are: 1) financial issues associated with the experiment (41/53; 0.77); 2) collection of samples with concentrations covering the method's entire measuring range (38/53; 0.72); 3) lack of national guidelines detailing the steps of the verification process (37/53; 0.70); and 4) lack of statistical software to ease the calculations (35/53; 0.66).
Discussion
This survey was conducted in order to screen current practices for the verification of examination procedures among Croatian MBLs. Valuable information was gathered on specific aspects of the verification procedure and, as expected, our main finding was the heterogeneity of the verification protocols used in MBLs across Croatia.
The majority of the responding MBLs were non-accredited primary health care MBLs, which is expected given the high proportion of small MBLs due to the geographical properties of Croatia and the fact that accreditation is not mandatory in Croatia. Most of the surveyed MBLs are supervised by the CCMB, which means they have implemented a series of quality standards compiled according to ISO 15189. Thus, although not accredited, most MBLs in Croatia have implemented a quality management system based on ISO 15189, which should include the verification of examination procedures. However, 13% of the surveyed MBLs declared not to verify examination procedures, which is in contrast to the encouragement of accreditation in Croatia and the favourable accreditation trend found in other members of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) (10). The most prominent reasons for not implementing verification into routine practice were organizational (technical) and financial; however, the lack of a national document harmonizing verification procedures was also emphasized.
Verification is predominantly performed by most MBLs when a new analytical system, a new assay or a different method for an existing assay is implemented. However, about one third of participating MBLs stated not to perform verification in situations when a new generation assay or a new sample type for an existing assay is implemented. In general, when modifying an examination procedure after it has been verified, the impact of such modification on the intended use should be carefully evaluated (preferably using a risk analysis approach) (3, 7). If the modification is found to be relevant and potentially affecting the performance characteristics (as in the case of a new assay or sample type), a supplemental verification should be performed (3, 7, 11).
Furthermore, about 40% of participating MBLs in the second part of the survey declared not to document verification protocols for each examination procedure and only 27% of MBLs include an explanation if individual specifications are not examined. It has to be emphasized that a structured verification report (including the intended use of the examination procedure, description of the measurand, experimental plan including relevant performance characteristics, acceptance criteria, raw experimental data, concluding remarks and identity of the investigators) is an important element of the verification procedure which allows traceability of verification results to the requirements stated in the standard (7).
Acceptance criteria are based on the intended use of the examination procedure and should be defined prior to the verification experiment (7). The results of our survey show that the majority of the participating MBLs do not find defining acceptance criteria an issue. The preferred sources of acceptance criteria are biological variation data, manufacturer's claims and, finally, clinical recommendations. However, one third of the responding MBLs declare defining acceptance criteria an issue, and identify the availability of multiple sources of criteria and the lack of biological variation data as the main problems. The majority of the responding MBLs find that measurement procedures which do not meet the predefined verification criteria are not suitable for implementation into routine practice; however, they also tend to repeat the verification process using either a new set of the same samples or a new set of different samples. Each verification result that does not meet the acceptance criteria should be examined through risk analysis, assessing the probability and severity of potential harm and the practical meaning of the nonconforming result before the method is accepted for routine work. In terms of verification extent (i.e. which characteristics to verify) and the specific ways to perform verification (i.e. how to evaluate each performance characteristic), the ISO 15189 standard is merely a framework (1, 7). The responsibility for determining the extent of verification and evaluating the results obtained is left to the laboratory professionals of individual MBLs. This allows the design of verification experiments that meet the requirements without becoming too prescriptive, but at the same time causes great heterogeneity in terms of the experiments performed, as demonstrated by our results. The majority of participating MBLs include the minimal protocols when verifying an examination procedure: imprecision and bias.
Their combination can be used to assess measurement uncertainty in order to provide information on the examination’s accuracy (or lack of it). Imprecision depends on local conditions and thus must be assessed experimentally (3, 7). Participating MBLs stated that imprecision is preferably tested using commercially available control materials, and only in their absence using patients’ samples, although patient samples and pooled patient samples are the best available materials for imprecision studies (9). Furthermore, the design of local protocols for imprecision testing varies widely in participating MBLs potentially affecting the reliability of repeatability and within-laboratory imprecision estimates.
The majority of participating MBLs preferably estimate bias using internal quality control samples (and their assigned values), followed by data from external quality assessment and method comparison studies using patient samples. The first approach is invalid for the assessment of bias, as is the approach of using calibrators of different lots, declared by 7 participating MBLs. Interestingly, four MBLs declared to use certified reference materials for estimating bias. For bias estimation, samples with known "true" concentrations of the measurand (e.g. reference standards, external quality survey materials, interlaboratory quality control programs) are recommended (12, 13). However, due to its practicality, the most common approach to bias estimation is the comparison procedure using patient samples. It is important to keep in mind that this approach estimates the difference between the candidate and comparative measurement procedures, not the actual bias (13). Only 15 MBLs stated to follow the CLSI protocol for comparison studies using patient samples, i.e. a minimum of 40 samples. The majority of MBLs declared to perform a comparison procedure using fewer than 30 samples (some as few as 2 samples), which is unacceptable since it might seriously affect the soundness of the statistical analysis and jeopardize the confidence of the results obtained (13). Samples for comparison studies should be chosen carefully in order to cover the entire measuring range and to comply with measurand stability (11, 13). If obtaining samples with concentrations covering the measuring range is not possible, participating MBLs declare to collaborate with other laboratories in order to fulfil this requirement. However, 81% of the participating laboratories stated not to verify the stability of the samples used for comparison studies, which might unnecessarily introduce sample stability as a variable and compromise the results of the comparison study (13).
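The difference estimate obtained from such a patient-sample comparison can be summarized, for example, as a Bland-Altman style mean difference with an approximate confidence interval; wider intervals at small sample sizes illustrate why very small comparison studies are problematic. This is a minimal sketch with an invented function name and invented paired data.

```python
import statistics as st

def mean_difference(candidate: list[float], comparative: list[float]):
    """Mean difference between a candidate and a comparative measurement
    procedure from paired patient samples, with an approximate 95% CI
    (normal approximation). Note: this estimates the difference between
    the two procedures, not the actual bias."""
    diffs = [c - r for c, r in zip(candidate, comparative)]
    n = len(diffs)
    d_mean = st.mean(diffs)
    sd = st.stdev(diffs)
    half_width = 1.96 * sd / n ** 0.5      # half-width of the 95% CI
    return d_mean, (d_mean - half_width, d_mean + half_width)
```

With only a handful of samples the confidence interval becomes so wide that it cannot support any conclusion about the difference between procedures, which is the statistical reason behind the 40-sample recommendation.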
Although it is not realistic to expect that each laboratory will develop its own reference intervals (RI), the adequacy and usefulness of RI available from other sources should be verified in each individual laboratory (14). Croatian MBLs mainly adopt RI provided by three sources: the CCMB, manufacturers and/or the literature (15). Accordingly, practices regarding RI verification differed among laboratories. The verification report is expected to contain a conclusion on the RI to be applied (e.g. applicability in relation to the method and/or patient population). If MBLs want to verify manufacturers' or literature data, the verification procedure is, for the majority of analytes, neither time-consuming nor costly, and may be performed with as few as 20 samples from reference individuals or using data from the method comparison study (14).
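The small-sample RI verification mentioned above can be sketched as a simple counting rule in the spirit of the CLSI approach: with about 20 samples from reference individuals, the candidate interval is considered verified if no more than 2 results fall outside it. The function name is our own illustration.

```python
def verify_reference_interval(results: list[float],
                              lower: float, upper: float) -> bool:
    """Small-sample reference interval verification: with ~20 results from
    reference individuals, the candidate interval is considered verified
    if no more than 2 results fall outside it."""
    if len(results) < 20:
        raise ValueError("at least 20 reference-individual results are expected")
    outside = sum(1 for x in results if not (lower <= x <= upper))
    return outside <= 2
```

If the rule fails, the usual next step is to repeat the check with a fresh set of reference samples or, ultimately, to establish a local RI.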
Detection capability estimates are important performance specifications for measurement procedures that need to be quantitatively reliable at the low end of the measuring range. The decision on which estimate to verify (limit of blank (LOB), limit of detection (LOD) and/or limit of quantification (LOQ)) depends on the particular measurement procedure (16). For example, LOD and LOQ are important characteristics when extremely low amounts of the measurand have clinical significance (disease diagnosis and screening, presence or absence of substances, etc.). Unfortunately, our results showed that Croatian MBLs in general do not include the estimation of LOB, LOD and LOQ in their verification protocols.
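For orientation, the classical parametric approximations relating these estimates can be sketched as follows (in the spirit of the CLSI detection capability guideline); the function name and the minimal test data are illustrative, and in practice considerably more blank and low-level replicates are required.

```python
import statistics as st

def lob_lod(blank: list[float], low: list[float]):
    """Parametric approximations of detection capability estimates:
      LoB = mean of blank measurements + 1.645 * SD of blank measurements
      LoD = LoB + 1.645 * SD of a low-concentration sample
    """
    lob = st.mean(blank) + 1.645 * st.stdev(blank)
    lod = lob + 1.645 * st.stdev(low)
    return lob, lod
```

LOQ is then typically defined functionally, e.g. as the lowest concentration at which a predefined imprecision (or total error) goal is still met.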
Linearity data are usually provided by manufacturers, and their inclusion in the verification protocol is not mandatory (3, 5, 17). About 60% of the responding MBLs declared not to include this key analytical characteristic in their verification protocols.
A qualitative test gives a binary result, i.e. positive/negative. Some qualitative tests produce a numerical response/ratio which is translated into a dichotomous result by comparison with the corresponding cut-off value. Manufacturers usually provide cut-off concentration values for different purposes (screening, diagnosis, disease management), but sometimes recommend that the laboratory establish its own cut-off values (5, 18, 19). According to the responses related to verification protocols for qualitative tests, the majority of participating MBLs implement neither the verification of cut-off value accuracy nor the evaluation of diagnostic performance characteristics in their verification protocols. These results are quite surprising and indicate the urgent need to address this issue in a comprehensive national document.
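The diagnostic performance characteristics in question are computed from a simple 2x2 contingency table of test results against the reference classification; the following sketch (function name and example counts are our own illustration) shows the standard definitions.

```python
def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Basic diagnostic performance characteristics of a qualitative test
    at a given cut-off, from a 2x2 table:
    tp/fp/fn/tn = true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # positives correctly detected
        "specificity": tn / (tn + fp),   # negatives correctly detected
        "PPV": tp / (tp + fp),           # positive predictive value
        "NPV": tn / (tn + fn),           # negative predictive value
    }
```

Verifying a manufacturer's cut-off then amounts to checking that the locally observed sensitivity and specificity are compatible with the claimed values for the intended use (screening favouring sensitivity, confirmation favouring specificity).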
Measurement uncertainty describes the range of values within which the true value of the measurand (or a quantifiable property of the analyte) is expected to lie with a given probability (20). It is easily included in the verification protocol since it can be estimated from data obtained during the verification experiment. Despite the existence of a national document on the estimation of measurement uncertainty, harmonization has not taken root in practice, since 60% of the responding MBLs declared not using these national guidelines.
Our study has some limitations that need to be pointed out. Firstly, the response rate of our nationwide survey was 55%, which means that our results may not be representative of all Croatian MBLs. Furthermore, data from our participants were self-reported and thus could not be independently verified. Additionally, since this is the first survey investigating national verification procedures, we were unable to compare our results with similar studies. Thus, the obtained results were compared and discussed in relation to available recommendations, mainly those issued by the CLSI or EFLM position papers.
In conclusion, heterogeneous verification protocols are routinely implemented in Croatian MBLs. This confirms that a national document on the verification of examination procedures, tailored and optimized specifically for local demands and circumstances, would not only help harmonize verification procedures but would also facilitate the verification process itself, so that all MBLs in Croatia can perform verification of examination procedures.