Test Method Validation for Visual Inspection of CDI Components in Incoming Quality Assurance
A cardiovascular device manufacturer required validated evidence that Incoming Quality Assurance (IQA) operators could reliably distinguish acceptable from unacceptable CDI (Cardiovascular Device Incorporated) components during incoming inspection. The validation had to cover 118 PIP (Part Inspection Plan) features grouped into 15 distinct inspection categories, including adhesive placement, color verification, wire inspection, barcode labels, label printing, presence/absence checks, and surface finish. Critical concerns included ensuring operator repeatability and reproducibility across varying component types, preventing acceptance of defective incoming materials, and maintaining compliance with medical device quality requirements.
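The categorization step can be sketched as below. Only seven of the 15 category names are given in the text, so the category list and the feature-to-category assignment here are illustrative, not the actual PIP mapping:

```python
# Hypothetical PIP feature records: (feature_id, category). The real
# 118-feature list and its category assignments are not reproduced here.
CATEGORIES = [
    "adhesive placement", "color verification", "wire inspection",
    "barcode labels", "label printing", "presence/absence", "surface finish",
]

features = [(f"PIP-{i:03d}", CATEGORIES[i % len(CATEGORIES)]) for i in range(118)]

# Group features by inspection category.
by_category = {}
for feature_id, category in features:
    by_category.setdefault(category, []).append(feature_id)

# Select at least one representative feature per category for testing.
representatives = {cat: fids[0] for cat, fids in by_category.items()}
```

Grouping features into homogeneous categories lets one worst-case sample stand in for every feature inspected the same way, which is what keeps the test-sample count manageable.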
Engineering Decisions
Categorized 118 PIP features into 15 homogeneous inspection categories
Created 42 test samples (29 photos + 13 physical parts): 25 'True-Pass', 17 'True-Fail'
Implemented MRD ranking system to identify worst-case inspection scenarios
Selected minimum one PIP per category ensuring representative coverage
Used randomized presentation order to minimize learning effects and bias
Conducted testing over multiple days to simulate real-world inspection variability
Replaced specific samples between Rev A and Rev B to address operator recognition bias
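The sample set and randomized presentation order described above can be sketched as follows. Sample IDs and the seed scheme are assumptions for illustration; the point is that each operator/trial combination gets an independently shuffled, reproducible run sheet:

```python
import random

# 42 test samples: 25 'True-Pass' + 17 'True-Fail' (IDs are illustrative).
samples = [f"PASS-{i:02d}" for i in range(1, 26)] + \
          [f"FAIL-{i:02d}" for i in range(1, 18)]

def presentation_order(operator: int, trial: int, seed: int = 0) -> list:
    """Return a reproducible, independently shuffled run sheet for one
    operator/trial combination, to reduce learning effects and bias."""
    rng = random.Random(seed * 100 + operator * 10 + trial)
    order = samples.copy()
    rng.shuffle(order)
    return order

# One run sheet per operator (1, 2) per trial (1, 2).
run_sheets = {(op, tr): presentation_order(op, tr)
              for op in (1, 2) for tr in (1, 2)}
```

Because the shuffle is seeded per operator/trial, the same run sheets can be regenerated when documenting or auditing the protocol.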
Validation Strategy
- Rev A execution unsuccessful: both associates incorrectly accepted a 'True-Fail' sample, requiring a protocol revision
- Rev B execution successful: all 'True-Pass' units correctly identified, except one sample rejected by Associate 1 on Trial 1
- All 'True-Fail' units (17 samples × 2 operators × 2 trials = 68 evaluations) correctly identified as Fail
- Inter-associate agreement (Gwet's AC1): 0.9683 (p < 0.001)
- Intra-associate agreement: Associate 1 AC1 = 0.9692, Associate 2 AC1 = 1.00 (both p < 0.001)
- Overall agreement AC1: 0.9846 (p < 0.001), exceeding the 0.7 acceptance threshold
- Stability demonstrated across multi-day testing interval
- System suitability confirmed through Attribute Gage R&R
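The AC1 figures above are consistent with Gwet's first-order agreement coefficient. A minimal two-rater, binary-rating sketch is shown below; this is illustrative only (the validation itself would rely on a validated statistical tool), with ratings coded 1 = Pass, 0 = Fail:

```python
def gwets_ac1(rater1, rater2):
    """Gwet's AC1 agreement coefficient for two raters, binary ratings (0/1)."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed agreement: proportion of items both raters rated alike.
    pa = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement for two categories: pe = 2*pi*(1 - pi), where pi is
    # the average proportion of items either rater assigned to 'Pass'.
    pi = (sum(rater1) + sum(rater2)) / (2 * n)
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# Example with illustrative data (not the study's ratings):
score = gwets_ac1([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])  # ≈ 0.6
```

Unlike Cohen's kappa, AC1 remains stable when one category dominates, which matters here because most incoming samples are conforming; that is why the 0.7 threshold is applied to AC1 rather than kappa.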
Final Outcome
Successfully validated visual inspection methods for incoming CDI components across all 15 inspection categories with near-perfect operator agreement (AC1 = 0.9846). Demonstrated that trained IQA operators can reliably detect component defects including adhesive placement errors, color variations, wire defects, barcode/label issues, and surface finish problems. The validation covered 74 Part Inspection Plans and 118 PIP features, establishing a robust incoming inspection process with documented evidence of operator capability and process reliability for regulatory compliance. The iterative validation approach (Rev A failure leading to Rev B success) demonstrated continuous improvement and rigorous quality standards.
