ChatGPT in Medical Librarianship: A Game-Changer in Information Access

(Michelle writing) I was at a meeting last week on the use of AI in science, labs, medicine, librarianship, etc. It was very interesting, and I really believe AI will evolve to have a greater role in our professional lives. I was talking with another librarian at the meeting who mentioned my blog. I told him that I haven’t been able to post as much as I would like because of other commitments. He suggested, half jokingly, that I should get ChatGPT to write my posts.

Boom, the light bulb went on. What better way to play with ChatGPT or other AIs than to use them to create and write posts? Please understand that I will be very transparent and upfront about when the AI is writing and when I am writing. So this will be my first post: having ChatGPT write about the use of ChatGPT in librarianship.

(ChatGPT writing)

Medical librarians play a crucial role in the healthcare industry, acting as gatekeepers to a wealth of knowledge. Their job is to provide healthcare professionals and researchers with reliable, up-to-date information. In recent years, the advent of artificial intelligence has transformed the way medical librarians work, and one of the most remarkable AI tools in their arsenal is ChatGPT. Let’s explore how ChatGPT is making waves in the field of medical librarianship.

  1. Efficient Information Retrieval: Searching for medical literature and research papers can be a time-consuming task. With ChatGPT, medical librarians can now swiftly retrieve the most relevant information. Its ability to understand complex medical terminology and provide contextually accurate search results is invaluable.
  2. Answering Clinical Queries: Healthcare professionals frequently have specific clinical questions that demand well-researched answers. ChatGPT can analyze vast databases and scientific literature to provide concise and evidence-based responses. This not only saves time for librarians but also ensures that healthcare providers receive accurate and current information, directly impacting patient care.
  3. Enhanced User Experience: ChatGPT offers a user-friendly interface. Medical professionals and researchers can interact with the AI model conversationally, making it more accessible for users. The ability to understand natural language queries simplifies the search process and allows for a smoother information retrieval experience.
  4. 24/7 Availability: Unlike human librarians, ChatGPT is available around the clock. This means healthcare professionals can access information whenever they need it, even outside regular office hours. The availability of ChatGPT enhances the efficiency and accessibility of information resources.
  5. Patient Education: Medical librarians often play a role in patient education. ChatGPT can generate patient-friendly explanations of medical conditions, treatment options, and preventive measures. This empowers patients to better understand their health and medical choices.
  6. Consistency and Reliability: ChatGPT provides consistent, reliable responses based on the latest medical literature. Librarians can rely on it to ensure that the information they distribute is accurate and up to date.

While ChatGPT offers numerous advantages in medical librarianship, it is essential to be mindful of potential challenges, such as ensuring data privacy, addressing issues of accuracy and bias, and using AI responsibly to complement rather than replace human expertise.

In conclusion, ChatGPT is transforming the landscape of medical librarianship. With its efficiency, 24/7 availability, and user-friendliness, it empowers medical librarians to provide healthcare professionals, researchers, and patients with top-notch, up-to-date information. As AI continues to advance, ChatGPT’s role in medical librarianship is only set to expand, further improving the quality of healthcare and research in the medical field.

(Michelle writing)

My notes:

  1. I used ChatGPT 3.5 for this post. I will use 4.0 in the future too.
  2. I copied and pasted the entire thing.
  3. I asked it to write a medium-length post and that was seriously too long, so this was its version of a short post.
  4. My prompt was “write a short blog post on the use of ChatGPT in medical librarianship”.
  5. Beyond one brief sentence about challenges, it didn’t really list any concerns or possible negative things we need to be aware of.
  6. I need to be more detailed in my prompt to have it mention both the positives and the negatives of its use (see the sketch just below this list for what a more detailed prompt might look like).
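
If you wanted to run this same experiment through the API instead of the chat window, here is a minimal sketch using the OpenAI Python SDK with a more detailed prompt that asks for both sides. To be clear, this is just an illustration and not what I actually did: I used the regular ChatGPT web interface, and the prompt wording and model name below are only examples.

```python
# Illustration only: sending a more detailed prompt through the OpenAI Python SDK
# instead of the ChatGPT web interface. The prompt wording and model name are
# examples, not what I actually used for this post.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Write a short blog post on the use of ChatGPT in medical librarianship. "
    "Cover both the potential benefits and the drawbacks (for example AI "
    "hallucination, the 2021 knowledge cutoff, and paywalled literature), "
    "and do not invent citations."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```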

I hope to post once a month using ChatGPT or another AI. For next month’s post I am thinking of asking ChatGPT to analyze all of my posts and seeing what kind of results I get. Since this is an experiment in the use of AI, please let me know what you would like me to test and I will see if I can do it; this is the perfect low-risk way to try things out.

Let me know. 🙂

17 thoughts on “ChatGPT in Medical Librarianship: A Game-Changer in Information Access”

  1. Just had a patron (MD) give me two citations and ask for help finding them, as they were having difficulty retrieving them from our library. Turns out this was my first experience (and the patron’s too) of tracking down an AI hallucination. Usually, if a citation has an error, it’s pretty easy to figure out: wrong pub date, a fat-fingered volume number. Not this time!

    After an hour of using every tool at my disposal, I finally asked the patron if they had used ChatGPT while searching and when they said yes, I said, “Well, THERE’S your problem!”

  2. Fascinating experiment, Michelle. I, too, am impressed by the flow of the writing. For a minute there, I almost thought I was going to be out of a job. Thank you for sharing this experiment with us on your blog. The idea of using it for writing emails fascinates me. I feel as though it is not uncommon for me to spend way too much time writing an email.

  3. I would not use it right now for that kind of research because it might give an answer, but ChatGPT 3.5 will not always provide correct references, and it is difficult to say where it found the information it displayed. Right now I would do that kind of search in PubMed or other appropriate bibliographic databases. Doctors, nurses, and other medical professionals should not be using ChatGPT 3.5 to find literature to answer medical queries. The risk of AI hallucination, the lack of transparency about where it found its answers, its inability to get behind paywalls for published journal articles, books, etc., and its 2021 knowledge cutoff really hamper it for finding that type of information.

  4. ChatGPT 3.5’s literature search capabilities are really quite subpar. Its information is only as recent as 2021, it makes up references (although they look extremely legit), and it has difficulty searching databases like PubMed (it can do it, but the results are not precise or reliable). Finally, it cannot go behind paywalls. So if there was a really great article on a topic that was published in 2022 in JAMA, it isn’t going to retrieve it… OR it is retrieving information based on the free abstract or a pirated version of the article… none of which I would want to use for research.

  5. That is a good idea. I would have to do that in the prompt when asking it to write the post. It would also be interesting to see if the links to the examples suffered from AI hallucination like what happens when it lists references or citations. Something to test. Thanks for the idea.

  6. It would not give references because I did not put that in my request; I asked it merely to write the post. This is a good example of the importance of your query to ChatGPT (or prompt engineering). You have to be very specific in what you type to get the output that you require. So for it to add references, I would have had to tell it to include references. I did not do that, out of habit, because I already know ChatGPT 3.5 suffers from AI hallucination and makes up quite a bit of its references. So while the content is correct, it really doesn’t preserve the idea of citing your evidence.

  7. ChatGPT 3.5 currently sucks tremendously at providing good search results for articles on a medical topic. It is rife with hallucinations for the supporting evidence and citations. So I wouldn’t rely on it right now for research. I am currently playing with and testing ChatGPT 4 and will see what that yields.
    HOWEVER…. the concept of using them as an aid in research is sound (as long as you have experts vetting the results and you have good prompt engineers writing queries).
    ChatGPT 3.5 can be extremely helpful for writing emails, outlines of reports, etc. I used it to craft an email which required a detailed explanation of a complicated situation. I could have written the entire thing myself, but it would have taken me an hour or so of going over the email to reword things with less jargon and make it simpler. ChatGPT wrote it, then I edited things to be more in my voice and made sure it stayed true to my meaning. Saved me a lot of time, actually.

  8. The answer is sort of. ChatGPT 3.5 cannot do this well at all. Its data is only as recent as 2021, and it cannot find things behind paywalls. It also suffers from some severe AI hallucination and makes up a lot of legit-looking articles, so you will end up searching for fake articles that it says are on the topic. It also is not really awesome at understanding how to search PubMed. Part of that is that our queries are formatted for PubMed, not ChatGPT, but also PubMed wasn’t designed to be searched by an LLM, so the two don’t communicate as well as they should to retrieve precise and reliable results.
    I am currently playing with ChatGPT 4, which hallucinates less and has more recent data. It still can’t get behind paywalls, obviously. I am not sure yet how well it does searching for medical articles/literature, as I am still testing it.

  9. I wonder if ChatGPT can gather a list of articles like when we do a literature search. Can you tell ChatGPT to search for articles on this real question from a physician: Does nutritional rehabilitation improve cardiac function in patients with anorexia nervosa?
    I did the search. I got 17 articles. Let’s see if ChatGPT finds more.

  10. Really appreciate how you incorporated the two posts, yours and ChatGPT’s. I agree with your concern that it did not offer any negative possibilities to be aware of and only briefly mentioned the human element. I look forward to future posts on this, as I will have little opportunity where I am to explore this on my own.

  11. Hi Michelle,

    I didn’t realize how “agile” AI is in responding to a short question such as the one you posed.

    I see lots of possibilities in incorporating it, particularly for specific research queries.

    Best To You,
    Brian

  12. Michelle, I’d like to see how ChatGPT handles this real question from my patron. Here’s the question: does decriminalization or legalization of marijuana lead to changes in admission patterns for pediatric illnesses commonly associated with marijuana, including cannabinoid hyperemesis syndrome, psychosis, or accidental ingestion?

  13. Michelle, can you ask ChatGPT how to do an efficient literature search? Just general, no particular topic.

  14. I am impressed with ChatGPT’s composition of the blog entry. The flow of ideas is smooth. But it did not give us the references.

  15. Very cool that you tried this out, Michelle! Obviously ChatGPT is not very self-critical, so it would be nice to know what the response would be if it had to include negative points as well. “Efficient information retrieval” and “Answering clinical queries” are not things where I would rely on ChatGPT.
    As for the 24/7 availability, there might be more reliable library sources to use there, instead of ChatGPT …

  16. ChatGPT doesn’t cite its own sources unless you specifically ask it what sources it uses. Please ask it in a prompt.

  17. It would be interesting if it can embed links to examples. The list of pros sounds very impressive, but I want the receipts 😉
