

Applications of large language models (LLMs) in healthcare

Being greeted by a chatbot is now an everyday experience, thanks to the widespread use of artificial intelligence (AI). The world’s first chatbot, ELIZA, shocked and surprised its creator, MIT researcher Joseph Weizenbaum, in the mid-1960s. Weizenbaum had built ELIZA to simulate a conversation between a user and a psychotherapist, and he was taken aback when he realised that users were confiding openly in what they took to be an empathetic therapist. The unexpected results of this experiment eventually turned him into a critic of AI: he became wary of giving computers the capacity to make choices that belong to humans.

Large language models (LLMs) are algorithms that allow computers to understand the structure of human language and generate it, and they can perform a wide variety of natural language processing (NLP) tasks. The most famed example is ChatGPT (Chat Generative Pre-trained Transformer). By training on vast text corpora, a model with billions of parameters learns to predict the next word or token. Though the idea sounds simple, it lets computers bring enormous amounts of data and processing power to bear on language. ELIZA was a precursor to current-day LLMs: the simulated conversation it held with users in the 1960s has now expanded into almost every sector. A round-up of a few applications of large language models in healthcare is provided below.
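The "predict the next word" idea can be illustrated with a deliberately tiny sketch. Real LLMs learn billions of parameters over huge corpora; the bigram counter below only captures the core intuition, and the toy corpus is an invented example:

```python
from collections import Counter, defaultdict

# Count which word follows which in a small corpus, then predict the
# most frequent successor. This is a bigram model, the simplest possible
# stand-in for next-word prediction as performed by an LLM.
corpus = (
    "the patient reports mild pain . "
    "the patient reports no fever . "
    "the doctor reports mild improvement ."
).split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("patient"))   # "reports" in this toy corpus
print(predict_next("reports"))   # "mild" (seen twice, vs. "no" once)
```

An LLM replaces these raw counts with a learned probability distribution over its entire vocabulary, conditioned on far more context than a single preceding word.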

Medical transcription – Medical note-taking evolved from papyrus to paper to typewriter to computer. Audio recordings allowed medical transcription to be outsourced as a service, and the industry grew steadily, reaching a value of USD 2.6 billion in 2022 with a projected market value of USD 3.79 billion by 2029. NLP now allows dictations to be transcribed automatically, which is more efficient than manual transcription. The industry did fear that this would render many roles obsolete and lead to unemployment in the sector, but the continued need for skilled professionals to maintain the quality of computer-generated reports has allayed those fears.

Virtual nursing assistants – A pre-trained, AI-powered virtual medical assistant can cover many of the mundane but mandatory tasks in healthcare processes, from scheduling appointments, providing basic information and answering queries to gathering patient information on symptoms. The advantages of employing AI for such tasks include ease and accuracy of data collection, shorter waiting times, scalability, real-time interaction and lower costs.
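One way to picture the routing behind such an assistant is a simple intent matcher. Production assistants use LLMs rather than keyword lists, and the intent names and keywords below are illustrative assumptions, not any vendor's schema:

```python
# A minimal sketch of routing patient messages to task handlers via
# keyword-based intent matching. Each intent would map to a workflow
# such as appointment booking or symptom intake.
INTENTS = {
    "schedule": ["appointment", "book", "schedule", "reschedule"],
    "symptoms": ["pain", "fever", "cough", "symptom"],
    "info": ["hours", "location", "insurance", "cost"],
}

def detect_intent(message: str) -> str:
    words = message.lower().split()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return "fallback"  # unrecognised: hand off to a human agent

print(detect_intent("I need to book an appointment"))    # schedule
print(detect_intent("I have had a fever since Monday"))  # symptoms
```

An LLM-based assistant replaces the keyword lookup with learned language understanding, but the surrounding structure, classifying the request and dispatching it to a workflow, is the same.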

Drug discovery – The ability of LLMs to mine enormous volumes of unstructured data and electronic health records (EHRs) for patterns and insights has helped accelerate several steps of the drug-discovery pipeline. NVIDIA, for instance, used MegaMolBART, a large language model for chemistry, to speed up drug discovery.

Personalised medicine – An AI-powered chatbot can offer personalised advice based on a patient’s specific medical data, preferences, lifestyle and medical history. Graphable, for example, used LLMs to extract insights from clinical data for better patient-journey mapping and more targeted treatments.

Medical image analysis – Radiology is another area of healthcare where LLMs are being explored, in this case for extracting information from radiology images. Information from these images is critical for diagnosis and treatment decisions. Pre-trained models based on BERT (Bidirectional Encoder Representations from Transformers) have been used to generate radiology reports automatically from annotated datasets.
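The last step of such a system, turning detected findings into report text, can be sketched with templates. BERT-style generators produce fluent prose learned from annotated reports; this template version only illustrates the findings-to-report step, and the finding names and phrasings are invented examples, not a clinical vocabulary:

```python
# A toy sketch: map findings detected by an image-analysis model to
# canned report sentences and assemble a draft report.
TEMPLATES = {
    "cardiomegaly": "The cardiac silhouette is enlarged.",
    "effusion": "A pleural effusion is present.",
    "normal_lungs": "The lungs are clear.",
}

def draft_report(findings):
    """Assemble one report sentence per recognised finding."""
    lines = [TEMPLATES[f] for f in findings if f in TEMPLATES]
    return " ".join(lines) if lines else "No acute findings."

print(draft_report(["cardiomegaly", "normal_lungs"]))
```

A learned generator replaces the fixed templates with sentences conditioned on the image features themselves, which is what makes fluent, case-specific reports possible.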

Ethical considerations of healthcare large language models

In conclusion, data privacy, accountability and security, robust regulatory frameworks, bias-free datasets and wider ethical implications must all be addressed in detail before LLMs can be used in clinical applications. The initial apprehensions that led Weizenbaum to become a critic of AI are still relevant. He cautioned that “man should not rely too much on technology to escape the burden of acting as an independent agent,” and argued that it could be dangerous to let computers alone confront genuine human problems. Weizenbaum’s foresight urges us to acknowledge the importance of human judgement in navigating complex issues, especially within sensitive domains like healthcare. As AI continues to evolve, its integration must align with ethical considerations and the preservation of human autonomy. Emphasising a collaborative approach between AI and human expertise will not only mitigate risks but also foster a more inclusive and ethically sound landscape for applying these technologies in clinical settings. Balancing innovation with ethical prudence remains pivotal to harnessing AI’s potential without relinquishing our roles as conscientious decision-makers.

