Integrated Analysis of Quality Use of Pathology Program (QUPP) Final Reports


RCPA – Quality Assurance Programs Key Indicator Project (2004)





Description


This pilot project developed a mechanism to identify poorly performing laboratories on a continuing basis. It arose from recommendation 5.1 of the Corrs Chambers Westgarth report The Evaluation of Australian Pathology Laboratory Accreditation Arrangements, prepared for the Australian Government Department of Health and Ageing in 2002 (Corrs Chambers Westgarth Lawyers), which states:
that the DHA and the HIC seek the cooperation of the RCPA QAP to establish explicit external quality assurance performance criteria, initially in chemical pathology and gynaecological cytology, and a mechanism for the RCPA QAP to identify relatively poorly performing laboratories.
This pilot was the first stage in that process. It developed Key Performance Indicators (KPIs) for chemical pathology and cytopathology that could potentially form the basis of an early-warning system for poor laboratory performance.

Grant Recipient


Royal College of Pathologists of Australasia Quality Assurance Programs Pty Ltd (RCPA QAP)

Aims and Objectives


  • to determine whether quality assurance data can be used as an early indicator of poorly performing laboratories

  • to develop a model or indicator that will select the laboratories most probably needing review

  • to determine whether the same or a similar model or indicator can apply to all disciplines of pathology.


These aims and objectives were achieved by this project.

Outcomes


  • A National Pathology Accreditation Advisory Council (NPAAC) Steering Committee was established to oversee the project, and two KPI working groups, one for each discipline (chemical pathology and cytopathology), were established to assist the project.

  • The project groups and Steering Committee determined the methodology for the pilot should be based on a ‘peer review process’ rather than a ‘standards’ approach.

  • Laboratories were reviewed individually to ensure their assessment included only relevant data, using the following four-step process:

    1. Selection of the peer group of the laboratory under review.

    2. Selection of the assessment process.

    3. Processing of the data from the laboratory and its peers, and ranking of performance.

    4. Consolidation of all data to obtain a KPI.

  • It was considered that more detailed feedback on quality assurance (QA) performance was needed to help laboratories improve; it was envisaged the Peer Review Committee (PRC) would address this issue. Most laboratories would also be complimented on good performance, with this program identifying potential problems earlier while also providing practical and useful guidance to participants.


Chemical Pathology Program

  • The Chemical Pathology Program compared a laboratory’s performance with all laboratories in its peer group (those performing the same tests) and then ranked the laboratories against each other.

  • The Quality Assurance Program (QAP) data were modeled using various statistical tests. The Coefficient of Variation (CV%) was considered the best tool for developing the KPI ranking across the whole laboratory for all analytes.

  • A number of key/critical analytes were identified and ranked for each laboratory using ‘Total Error’.

  • All laboratories would receive a report of their performance.

  • The set of assessment criteria recommended for Chemical Pathology was:

    • a KPI of <0.05

    • a participation rate in the QAP of <80%

    • a late result rate of >10%

    • a result correction rate of >5%

If a laboratory met these criteria, it would be referred to the PRC to assess whether there was a problem.

  • The program required some further minor refinements, but was anticipated to be implemented on a trial basis in 2005.
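The CV%-based ranking and the recommended referral criteria above can be sketched in code. This is a minimal illustration only: the laboratory names and data are invented, and the assumption that any single breached criterion triggers referral is ours, since the report does not specify whether one or all criteria must be met. The actual RCPA QAP methodology consolidated many analytes and survey cycles.

```python
# Illustrative sketch of a CV%-based peer-group ranking and the four
# recommended Chemical Pathology assessment criteria. All data are
# hypothetical; the real methodology covered many analytes and surveys.
from statistics import mean, stdev

def cv_percent(results):
    """Coefficient of Variation: (standard deviation / mean) * 100."""
    return stdev(results) / mean(results) * 100

# Hypothetical QAP returns for one analyte (e.g. serum sodium, mmol/L)
# from three peer-group laboratories across repeated survey cycles.
peer_group = {
    "Lab A": [140, 141, 139, 140],
    "Lab B": [138, 144, 135, 143],
    "Lab C": [140, 140, 140, 141],
}

# Rank laboratories from lowest CV% (most consistent) to highest.
ranking = sorted(peer_group, key=lambda lab: cv_percent(peer_group[lab]))

def needs_prc_referral(kpi, participation_rate, late_rate, correction_rate):
    """Apply the four recommended assessment criteria (rates in percent).

    Assumes any single breached criterion triggers referral to the PRC.
    """
    return (
        kpi < 0.05                   # KPI below the recommended cut-off
        or participation_rate < 80   # QAP participation rate below 80%
        or late_rate > 10            # late result rate above 10%
        or correction_rate > 5       # result correction rate above 5%
    )

for rank, lab in enumerate(ranking, start=1):
    print(f"{rank}. {lab}: CV% = {cv_percent(peer_group[lab]):.2f}")
print(needs_prc_referral(kpi=0.30, participation_rate=95,
                         late_rate=2, correction_rate=1))  # False
```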


Cytopathology KPI Program

  • A similar process to the Chemical Pathology Program was developed with each laboratory ranked against its peers.

  • The program was modeled only in gynaecological cytopathology, and required further refinement over the following 12 months.

  • The recommended assessment criteria for Cytopathology laboratories were:

    • a KPI rank in the lowest 10th percentile of their peer group

    • more than two major errors in any program

    • less than 75% of results submitted

    • more than 50% of results submitted after the deadline.

If a laboratory fell inside the assessment cut-off ranges, it would be referred to a PRC.

  • Four scores were created for ranking laboratories, determined from a set of 20 slides. The laboratory supplied a diagnosis for each slide, which was scored according to whether it was the target response (10), an acceptable response (9), an unacceptable response (4) or a major error (-1).

  • The score values may need review to ensure they provide the required discrimination of performance. Further attention may be required to the classification of acceptable responses, unacceptable responses and major errors.

  • It was suggested that, as cytopathology would not have sufficient data for analysis in a six-month timeframe, rolling 12-month data be reviewed every six months.

  • These guidelines will be extended and modified as required for non-gynaecological cytology.
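The slide-scoring scheme and cytopathology referral criteria above can be sketched similarly. The category names, the summing of slide scores into a single total, and the assumption that any single breached criterion triggers referral are illustrative; the report does not specify these details.

```python
# Illustrative sketch of the 20-slide scoring scheme and the recommended
# cytopathology assessment criteria. Category names and the referral
# logic (any breach triggers referral) are assumptions for illustration.

SCORES = {
    "target": 10,        # the target response
    "acceptable": 9,     # an acceptable response
    "unacceptable": 4,   # an unacceptable response
    "major_error": -1,   # a major error
}

def score_submission(classifications):
    """Total score for a laboratory's classified slide responses."""
    return sum(SCORES[c] for c in classifications)

def needs_prc_referral(kpi_percentile, major_errors, submitted_rate, late_rate):
    """Apply the recommended cytopathology assessment criteria."""
    return (
        kpi_percentile <= 10     # KPI rank in the lowest 10th percentile
        or major_errors > 2      # more than two major errors in any program
        or submitted_rate < 75   # less than 75% of results submitted
        or late_rate > 50        # more than 50% of results submitted late
    )

# A hypothetical 20-slide submission:
# 15 target, 3 acceptable, 1 unacceptable and 1 major error.
submission = ["target"] * 15 + ["acceptable"] * 3 + ["unacceptable", "major_error"]
print(score_submission(submission))  # 150 + 27 + 4 - 1 = 180
print(needs_prc_referral(kpi_percentile=40,
                         major_errors=submission.count("major_error"),
                         submitted_rate=100, late_rate=0))  # False
```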


Peer Review Committee

  • The Peer Review Committee (PRC) was to be established for each discipline and would form a sub-committee of the Medical Testing Accreditation Advisory Committee (MTAAC) of the National Association of Testing Authorities (NATA).

  • The PRC was to be made up of:

    • two RCPA Fellows from the relevant disciplines

    • two other relevant professionals

    • two representatives from the relevant QA Office, one being the current chairperson

    • one NATA representative

    • one Government representative.

  • The PRC’s structure was to be trialed in the next stage of the Pilot Project, with no pecuniary action taken until the process had been agreed to by the Australian Government Department of Health and Ageing (DoHA).

  • It was the responsibility of the PRC to determine whether the laboratories referred to it were performing badly, or were simply at the bottom of a very high performing group of laboratories. If there was a concern, the laboratory would be referred to NATA for an accreditation visit or desk audit.

  • If a laboratory was enrolled in a non-RCPA QAP quality assurance program, it would still be required to submit reports similar to those provided by RCPA QAP Pty Ltd to the MTAAC Peer Review Subcommittees.

Findings


  • There are tests that are generally done well by all laboratories and tests that are not done well by any laboratory, a difference that reflects the stage of development of the test rather than the laboratory. Poor performance should therefore be attributed to the laboratory, not the analyte, and laboratory performance should be measured on the tests that are generally performed well by all laboratories.

  • KPIs are a tool to assist in identifying laboratories that may need review. No computer program has the subtlety to unerringly indicate a mandatory laboratory review. It was accepted that even where the peer review process suggested a problem, the KPI would only be used as a flag to notify NATA that the laboratory required some form of review, whether a formal visit, paper audit or something in between.

  • The Chemical Pathology quality assurance program (QAP) analysis and report appeared to have a high level of acceptance by the profession and participants.

  • The Cytopathology KPI Program was more difficult to develop because it is a qualitative (opinion-based) specialty rather than a quantitative (measurable) specialty.

  • The implementation of the Cytopathology KPI Program required a cultural shift amongst cytologists, and may require a review of the QAP, as previous QAPs have been viewed as “test and teach” exercises rather than as performance measurement.

  • Due to the small number of cytologists in any one laboratory, KPI testing may be viewed as an individual assessment rather than a laboratory assessment; despite the ‘pilot’ nature of the program, this caused concern in some laboratories.

  • Because of the qualitative nature of the testing process, cytopathology has difficulty providing suitable test materials in the quantities and variety needed for equitable assessment among laboratories. Some of these difficulties may be rectified in the future with the introduction of virtual microscopy.

  • Careful interpretation of the KPI data is essential. Poor performance must be comprehensively assessed and it is possible that in some cases the poorest performers may still be satisfactory for patient care.

Recommendations


  1. Extend the project to other disciplines, with blood-banking the next recommended area to undergo this process.

  2. Funding options beyond the pilot program should be investigated.

  3. Educate the profession about the project and the further refinement of the KPI process over time.

  4. Inform other QA providers of this initiative.

  5. Further studies would be invaluable as the process proceeds, particularly for:

    1. ‘critical’ analytes

    2. size of laboratory

    3. methods to identify ‘gaming and collusion’

    4. year-to-year consistency.

  6. Finalise software development on data from 2002 onwards.

  7. Process and distribute January to June 2004 data.

Key Project Learnings


  • It was felt that while graphical reporting was initially useful in ranking laboratories, the KPI plot should not be reported back to the laboratories, to avoid the potential for laboratories to promote their ‘ranking’ for commercial advantage. As the purpose is primarily to identify potentially poorly performing laboratories, the added information regarding relative rankings was considered irrelevant and might raise additional concerns.

  • Determining the cut-off point for ‘poorly performing’ laboratories was not an easy task; the data were subjected to several models and several rounds of evaluation.

  • The RCPA QAP needed to obtain legal advice regarding the feasibility of a liability waiver for laboratories that signed up to participate in this project. This is because laboratories whose KPI reports were submitted to the PRC, followed by a subsequent inspection by NATA and any other steps in the process, might otherwise sue the RCPA QAP for economic loss.

Follow on Initiatives and Projects


Pilot Laboratory Assessment and Peer Review Mechanism for Pathology Key Performance Indicators (2007).

