| Field | Value |
|---|---|
| Authors | ; Stark, Maria; Zapf, Antonia; Päpper, Marc; Hartmann, Arndt; Lang, Tobias |
| Item type | Article |
| Journal or Publication Title | Modern Pathology |
| Publisher | Elsevier Science Inc. |
| Place of Publication | New York |
| Volume | 36 |
| Number of Issue or Book Chapter | 3 |
| Page Range | p. 100033 |
| Date | 2023 |
| Institutions | Medicine > Lehrstuhl für Pathologie |
| Identification Number | |
| Keywords | INTERNATIONAL KI67; GUIDELINE; PATHOLOGY; digital pathology; mammary carcinoma; surgical pathology |
| Dewey Decimal Classification | 600 Technology > 610 Medical sciences Medicine |
| Status | Published |
| Refereed | Yes, this version has been refereed |
| Created at the University of Regensburg | Yes |
| Item ID | 76065 |
Abstract
Image analysis assistance with artificial intelligence (AI) has become one of the great promises in pathology over recent years, with many scientific studies being published each year. Nonetheless, and perhaps surprisingly, only a few image AI systems are already in routine clinical use. A major reason for this is the missing validation of the robustness of many AI systems: beyond a narrow context, the large variability in digital images due to differences in preanalytical laboratory procedures, staining procedures, and scanners can be challenging for the subsequent image analysis. Resulting faulty AI analysis may bias the pathologist and contribute to incorrect diagnoses and, therefore, may lead to inappropriate therapy or prognosis. In this study, a pretrained AI assistance tool for the quantification of Ki-67, estrogen receptor (ER), and progesterone receptor (PR) in breast cancer was evaluated within a realistic study set representative of clinical routine on a total of 204 slides (72 Ki-67, 66 ER, and 66 PR slides). This represents the cohort with the largest image variance for AI tool evaluation to date, including 3 staining systems, 5 whole-slide scanners, and 1 microscope camera. These routine cases were collected without manual preselection and analyzed by 10 participant pathologists from 8 sites. Agreement rates for individual pathologists between scoring with and without the assistance of the AI tool, with regard to clinical categories, were found to be 87.6% for Ki-67 and 89.4% for ER/PR. Individual AI analysis results were confirmed by the majority of pathologists in 95.8% of Ki-67 cases and 93.2% of ER/PR cases. The statistical analysis provides evidence for high interobserver variance between pathologists (Krippendorff's α, 0.69) in conventional immunohistochemical quantification. Pathologist agreement increased slightly when using AI support (Krippendorff's α, 0.72).
Agreement rates of pathologist scores with and without AI assistance provide evidence for the reliability of immunohistochemical scoring with the support of the investigated AI tool under a large number of environmental variables that influence the quality of the diagnosed tissue images. © 2022 THE AUTHORS. Published by Elsevier Inc. on behalf of the United States & Canadian Academy of Pathology.
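The reported agreement statistic, Krippendorff's α, measures chance-corrected agreement among multiple raters; α = 1 indicates perfect agreement and α = 0 agreement no better than chance, so values of 0.69–0.72 indicate substantial but imperfect consensus. The following is a minimal illustrative sketch of the nominal-data version of the statistic, not the analysis code used in the study; the function name and the example data are hypothetical:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal categories.

    ratings: list of units (e.g. slides), each a list of the category
    labels assigned by the raters who scored that unit.
    """
    # Build the coincidence matrix: within each unit rated by m >= 2
    # raters, every ordered pair of ratings contributes 1/(m - 1).
    coincidences = Counter()
    for unit in ratings:
        m = len(unit)
        if m < 2:
            continue  # units with a single rating carry no pairing info
        for a, b in permutations(unit, 2):
            coincidences[(a, b)] += 1.0 / (m - 1)
    # Marginal totals per category, and total mass n.
    marginals = Counter()
    for (a, _), w in coincidences.items():
        marginals[a] += w
    n = sum(marginals.values())
    # Observed disagreement: off-diagonal mass of the coincidence matrix.
    d_o = sum(w for (a, b), w in coincidences.items() if a != b) / n
    # Expected disagreement under chance pairing of the marginals.
    d_e = sum(marginals[a] * marginals[b]
              for a in marginals for b in marginals if a != b) / (n * (n - 1))
    return 1.0 - d_o / d_e
```

For example, two raters who always assign the same clinical category yield α = 1, while partial disagreement pulls α below 1; established implementations (e.g. `nltk.metrics.agreement.AnnotationTask.alpha`) can serve as a cross-check.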
Metadata last modified: 18 Mar 2025 10:09