High-accuracy AI improves lung cancer detection
Lung cancer is the deadliest of all cancers. It caused 1.7 million deaths worldwide in 2020, more than the deaths caused by the next three most fatal cancers. The death rate is high because lung cancer is hard to detect early: symptoms rarely appear in the initial stages, so the disease has usually metastasized by the time it is found, making treatment challenging. Advanced-stage lung cancer can also be resistant to chemotherapy.
Patients diagnosed at stage IV have a five-year survival rate below 10%, while those diagnosed at stage IA have a rate above 90%; early diagnosis is therefore the key to improving lung cancer prognosis. Low-dose CT (LDCT), PET/CT, and narrow-band imaging (NBI) are the diagnostic imaging procedures used to detect lung cancer. Fully analysing these scans is time- and effort-intensive, and a marked degree of reader variability has been observed, reducing the efficacy of lung cancer screening. Lung cancer is categorised as either non-small cell lung cancer (NSCLC) or small cell lung cancer, with 87% of cases classified as NSCLC. The imaging features that indicate lung cancer may be a single tiny nodule, multiple nodules, ground-glass opacity, or pleural effusion; small, subtle lesions that are difficult to detect can also be indicators. AI can assist throughout the course of lung cancer care, from detection and diagnosis through choice of therapeutic regimen to prognosis prediction.
AI-based systems developed for lung cancer screening are called computer-aided detection (CAD) systems. They are designed with multiple objectives: lung segmentation, pulmonary nodule detection and classification, and nodule malignancy prediction. AI excels at repetitive, large-volume computation and at analysing image-dominant diagnostics. Doctors spend excessive time reading images and pathology slides for diagnosis and reviewing charts to decide on the optimal treatment. Applying AI to LDCT or chest X-ray (CXR) reading can help radiologists reduce laborious work, minimise reader variability, and improve screening efficiency. The main tasks for AI in image reading are nodule detection and classification/malignancy prediction: AI increases the sensitivity of nodule detection, reduces interpretation time, and works well as a concurrent or second reader. Nodule classification and malignancy prediction are essential components of nodule detection. Nodules are categorised by texture as solid, part-solid, or non-solid, and by size; a trained AI model can classify nodules across six texture categories as well as a human expert can, and this classification is used to calculate the probability of malignancy. AI tools such as deep radiomics, which uses convolutional neural networks (CNNs), extract features invisible to the naked eye from a region of interest in a medical image for characterisation or prediction. The premise of radiomics is that electronic medical images contain information beyond visual perception that may better reflect tissue properties and so improve diagnostic or prognostic accuracy. Where a biopsy is not possible, feature extraction via radiomics can be used to characterise tumour histology.
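Classical radiomics starts from hand-engineered first-order statistics computed over a segmented region of interest; deep radiomics replaces these with CNN-learned features. As a minimal illustrative sketch of the first-order case (the feature set, the synthetic scan, and the Hounsfield-unit values are assumptions for demonstration, not a clinical pipeline):

```python
import numpy as np

def first_order_radiomics(image, mask, bins=32):
    """Simple first-order radiomic features over a segmented ROI."""
    roi = image[mask.astype(bool)]
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins before log
    return {
        "mean": float(roi.mean()),                # average attenuation
        "std": float(roi.std()),                  # intensity heterogeneity
        "min": float(roi.min()),
        "max": float(roi.max()),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
        "volume_px": int(roi.size),               # ROI size in pixels
    }

# Toy example: a synthetic 2-D "scan" with a brighter circular nodule.
rng = np.random.default_rng(0)
image = rng.normal(-800, 50, size=(64, 64))       # lung parenchyma, ~-800 HU
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 8 ** 2  # circular ROI, radius 8
image[mask] += 700                                # denser nodule tissue

features = first_order_radiomics(image, mask)
print(features)
```

In a deep-radiomics pipeline, the dictionary above would be replaced by the activations of a CNN layer, but the downstream use is the same: a feature vector per lesion, fed into a classifier or prognostic model.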
A study on the influence of AI assistance on radiologists' image reading for lung cancer detection showed that higher sensitivity was achieved with only a slight increase in false positives. It also revealed that only high-accuracy AI improved performance and induced changes in readings, concluding that AI was helpful only when its competence was equivalent to or exceeded that of the human reader, and highlighting the need for high-performance AI tools in clinical settings. Researchers from MIT and Chang Gung Memorial Hospital have developed an AI tool, Sybil, for lung cancer risk assessment. Sybil analyses LDCT image data without human assistance and estimates the probability that a patient will develop lung cancer within the next six years. It was first trained on labelled CT lung scans with visible malignant tumours and then on scans with no observable indicators of cancer. Sybil scored strong concordance indices (C-indices) when tested on diverse sets of LDCT scans from sources such as the National Lung Screening Trial (NLST).
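The concordance index used to evaluate Sybil rewards a model for ranking patients correctly by risk: for every comparable pair of patients, the one who develops cancer sooner should have received the higher predicted risk. A minimal sketch of the metric (the cohort data and variable names here are hypothetical, and this simple version handles censoring only in the standard pairwise way):

```python
def concordance_index(times, events, risk_scores):
    """Fraction of comparable pairs whose risk ordering matches outcome ordering.

    times:       observed time to diagnosis or censoring, per patient
    events:      1 if cancer was diagnosed, 0 if the patient was censored
    risk_scores: model-predicted risk (higher = riskier)
    Ties in risk count as half-concordant, the usual convention.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable when patient i's event precedes time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical cohort: three diagnosed patients, two censored.
times = [2, 5, 3, 6, 4]
events = [1, 0, 1, 0, 1]
risks = [0.9, 0.2, 0.8, 0.1, 0.7]   # ranking agrees with outcomes
c_index = concordance_index(times, events, risks)
print(c_index)  # perfect ranking of all comparable pairs -> 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why strong C-indices across external cohorts such as NLST are taken as evidence that a risk model generalises.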
Machine and deep learning algorithms provide robust mechanisms for analysing vast volumes of image data and uncovering the complex biological interactions underlying them, enabling accurate diagnosis and personalised treatment. Eventually, this may lead to widespread use of imaging-based CAD systems that can detect a lung lesion, characterise its nature and genomic changes, and suggest optimal therapeutic options. Such a tool would considerably shorten the time between diagnosis and treatment and help frail patients avoid invasive procedures.