Purpose

To establish rigorous, clinically sound, and reproducible standards for the acquisition, verification, processing, and interpretation of blood-derived laboratory data using artificial intelligence (AI). These guidelines are intended to support safe, ethical, and standardized integration of AI technologies into diagnostic workflows and may serve as a foundation for broader industry-wide adoption.
1. Scope

This standard applies to all AI-supported systems used for the analysis and interpretation of blood test results, including but not limited to:
- Complete Blood Count (CBC)
- Comprehensive and Basic Metabolic Panels
- Lipid and Liver Panels
- Coagulation Studies
- Immunological, Endocrine, and Inflammatory Markers
- AI-based clinical decision support systems using blood data for risk stratification or diagnostic hypothesis generation
2. Data Handling Protocols

2.1 Data Acquisition

- Laboratory results must be obtained from ISO 15189-accredited laboratories or equivalent.
- All test results must be accompanied by structured metadata, including:
  - Anonymized patient identifier
  - Time and date of sample collection
  - Reference intervals
  - Analytical method and instrument identifier
2.2 Data Integrity Checks

- Values must be verified against clinically accepted physiological ranges.
- Quality control measures must detect and flag:
  - Biological implausibilities
  - Sample degradation indicators
  - Missing or corrupted values
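The integrity checks above can be sketched as a simple flagging pass over a result record. The analyte names and plausibility limits below are illustrative assumptions, not normative values from this standard.

```python
# Illustrative integrity check: flag values that are missing or fall
# outside broad physiological plausibility limits. Limits are examples only.
PLAUSIBILITY_LIMITS = {
    "hemoglobin_g_dl": (3.0, 25.0),
    "wbc_10e9_l": (0.1, 500.0),
    "potassium_mmol_l": (1.0, 12.0),
}

def integrity_flags(result: dict) -> list[str]:
    """Return a list of QC flags for one result record."""
    flags = []
    for analyte, (lo, hi) in PLAUSIBILITY_LIMITS.items():
        value = result.get(analyte)
        if value is None:
            flags.append(f"{analyte}: missing value")
        elif not (lo <= value <= hi):
            flags.append(f"{analyte}: biologically implausible ({value})")
    return flags
```

A record with a WBC of 700 × 10⁹/L would be flagged as implausible, and an absent potassium entry as missing.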
3. Dual-Level Data Revalidation Framework

3.1 Level 1: Preprocessing Validation

- Upon data ingestion, all input values are subject to:
  - Plausibility screening based on population-level clinical norms
  - Intra-patient temporal consistency checks, if historical data are available
  - Analytical variance modeling to identify instrument-specific anomalies
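The intra-patient temporal consistency check is commonly implemented as a delta check against the patient's previous result. A minimal sketch, assuming a relative-change threshold (the 50% default here is an illustrative assumption, not a value defined by this standard):

```python
def delta_check(current: float, previous: float,
                max_rel_change: float = 0.5) -> bool:
    """Return True if the intra-patient change between consecutive
    results stays within the allowed relative band (a simple delta check)."""
    if previous == 0:
        return current == 0
    return abs(current - previous) / abs(previous) <= max_rel_change
```

In practice the threshold would be analyte-specific and derived from biological variation data.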
3.2 Level 2: Independent AI Cross-Validation

- Diagnostic outputs must be verified by a secondary, independently trained AI model.
- A divergence of >5% in predicted classification or risk scoring must:
  - Trigger expert system review, or
  - Flag the case for human clinical oversight
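The divergence gate can be expressed in a few lines. This sketch interprets the 5% threshold as an absolute difference between the two models' risk scores; a relative measure would work the same way:

```python
def requires_oversight(primary_risk: float, secondary_risk: float,
                       threshold: float = 0.05) -> bool:
    """Flag a case for expert-system review or human clinical oversight
    when the two independently trained models' risk scores diverge by
    more than the threshold (>5%, read here as absolute difference)."""
    return abs(primary_risk - secondary_risk) > threshold
```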
4. AI System Standards

4.1 Model Transparency

- Every deployed model must include a model factsheet ("model card") detailing:
  - Data sources and preprocessing pipelines
  - Training/validation/test distribution
  - Performance metrics and known limitations
  - Regulatory certification status (if applicable)
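A model card can be kept machine-readable alongside the deployed artifact. The sketch below mirrors the required fields; the example values in the test are purely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Minimal machine-readable model factsheet mirroring Section 4.1."""
    name: str
    data_sources: list[str]       # data sources and preprocessing pipelines
    split: dict                   # training/validation/test distribution
    metrics: dict                 # performance metrics
    limitations: list[str]        # known limitations
    regulatory_status: str = "not certified"
```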
4.2 Explainability Requirements

- AI outputs must be accompanied by:
  - A ranked list of salient input features (e.g., top contributing biomarkers)
  - Confidence intervals or calibrated confidence scores
  - Decision traceability via SHAP, LIME, or an equivalent explainable AI (XAI) methodology
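For a linear model, the ranked feature list required above reduces to sorting per-feature contributions. The sketch below uses weight-times-value products as a dependency-free stand-in for SHAP or LIME attributions (which, for linear models, coincide up to centering); feature names are illustrative:

```python
def ranked_contributions(weights: dict[str, float],
                         inputs: dict[str, float]) -> list[tuple[str, float]]:
    """Rank biomarkers by |weight * value| for a linear model, as a
    dependency-free stand-in for SHAP/LIME-style attributions."""
    contribs = {name: weights[name] * inputs[name] for name in weights}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

A production system would instead call an XAI library against the actual model and attach the ranked list to each diagnostic output.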
4.3 Bias Assessment

- Models must undergo stratified performance audits across:
  - Sex and gender
  - Age groups
  - Racial/ethnic backgrounds
  - Relevant comorbidity clusters
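A stratified audit amounts to computing the chosen performance metric per subgroup and comparing across strata. A minimal sketch using accuracy (a real audit would use AUROC, sensitivity/specificity, and calibration per stratum); record fields are assumptions:

```python
def stratified_accuracy(records: list[dict], stratum_key: str) -> dict[str, float]:
    """Per-stratum accuracy; each record carries the stratum label,
    the model prediction ("pred"), and the ground truth ("truth")."""
    totals: dict = {}
    for r in records:
        correct, n = totals.get(r[stratum_key], (0, 0))
        totals[r[stratum_key]] = (correct + (r["pred"] == r["truth"]), n + 1)
    return {group: correct / n for group, (correct, n) in totals.items()}
```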
5. Data Security and Regulatory Compliance

- All systems must comply with GDPR, HIPAA, or applicable national data protection regulations.
- Data must be stored in encrypted environments, with transmission protected via TLS 1.3 or higher.
- All access and changes must be logged with immutable audit trails, retained for a minimum of 10 years.
6. Clinical Deployment Standards

- All diagnostic suggestions must be presented as clinical decision support, not as final diagnoses, unless the AI system is certified as a medical device.
- Outputs must be clinically interpretable, using standard terminology and reference values.
- Interoperability must be ensured via HL7 FHIR, LOINC, and SNOMED CT coding where applicable.
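For interoperability, a blood result would typically be exchanged as an HL7 FHIR Observation carrying a LOINC code. An illustrative payload for a hemoglobin result (LOINC 718-7, "Hemoglobin [Mass/volume] in Blood"); the numeric values and reference range are example data only:

```python
# Illustrative FHIR R4 Observation for a hemoglobin result.
hemoglobin_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "718-7",
            "display": "Hemoglobin [Mass/volume] in Blood",
        }]
    },
    "valueQuantity": {
        "value": 14.2, "unit": "g/dL",
        "system": "http://unitsofmeasure.org", "code": "g/dL",
    },
    "referenceRange": [{
        "low": {"value": 13.0, "unit": "g/dL"},
        "high": {"value": 17.0, "unit": "g/dL"},
    }],
}
```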
7. Continuous Model Surveillance

- Deployed AI systems must undergo quarterly performance re-evaluation using real-world data.
- Model drift detection must be automated and tied to alert systems for retraining thresholds.
- Clinician feedback should be captured and integrated into the AI lifecycle for post-market surveillance.
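Automated drift detection is often implemented by comparing the input or score distribution at deployment against the training baseline. A minimal sketch using the population stability index (PSI) over matching histogram bins; the ~0.2 alert threshold is a common rule of thumb, not a value mandated by this standard:

```python
import math

def population_stability_index(expected: list[float],
                               observed: list[float]) -> float:
    """PSI between two binned distributions (bin proportions summing to 1).
    Values above roughly 0.2 are commonly treated as significant drift
    and would trigger the retraining alert described above."""
    eps = 1e-6  # guard against empty bins
    return sum((o - e) * math.log((o + eps) / (e + eps))
               for e, o in zip(expected, observed))
```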
8. Certification and Ecosystem Engagement

AIMA Diagnostics proposes these standards for open collaboration and peer review, with the intent of eventual harmonization through international regulatory and standards bodies, including:
- HL7 International
- ISO/TC 215 Health Informatics
- IMDRF (International Medical Device Regulators Forum)
- Local and national health data authorities
Appendix: Key Definitions

- Data Revalidation: The process of confirming the accuracy and reliability of data through independent review mechanisms before diagnostic use.
- Dual-AI Architecture: A model validation approach wherein outputs are confirmed via a second independently trained algorithm.
- Model Card: A structured summary describing an AI model’s performance, training context, limitations, and appropriate use cases.
Version 1.2 | July 2025
Developed by AIMA Diagnostics, Oslo, Norway