According to a new study, up to 20% of UK doctors could be using generative AI tools in their practice.

Generative AI has been put to some strange uses lately: churning out naff images, playing Doom, and powering what might be the loneliest social media app around. Now General Practitioners in the UK have been spotted using it too, a development that is both promising and worrying. Up to 20% of them could be using the technology as part of their practice.

ScienceDaily has reported on a recently published paper titled "Generative Artificial Intelligence in Primary Care: An Online Survey of UK General Practitioners", based on a survey conducted through Doctors.net.uk, an online forum and database for GPs throughout the UK. Of the 1,006 respondents, 205 (20%) said they used "generative AI tools in their clinical practice".

When asked "Have any of these tools helped you in your clinical practice?", 16% of respondents said they had used ChatGPT at some point, 5% Bing AI, 4% Google Bard, and the remaining tools a single percent each. The paper's introduction likewise cites ChatGPT as the most popular option.

The results table doesn't specify why these figures sum to more than 20% (together they come to roughly 26%), though the total given at the bottom of the table suggests the excess isn't down to GPs ticking multiple boxes, as each response counts as a single choice.

It's also worth noting that respondents only said they had used AI tools at some point, not that they use them daily or even regularly. The proportion of polled GPs actively using AI is a possible figure, not a definitive one.

Among the respondents who use these tools, 29% reported using generative AI to create documentation after appointments, 28% to suggest a diagnosis, 25% to generate treatment options, 20% to summarize or build timelines from prior documentation, 8% to write letters, and 33% for some other purpose. Those figures add up to well over 100%, so respondents evidently could report more than one use.

Because these chatbots are trained on huge amounts of data scraped from the internet, there is a concern that what they generate could be distorted by misinformation or bias. As the paper puts it:

"They (chatbots), are prone to generating incorrect information. These models may also perpetuate or worsen racial, disability and gender inequalities in healthcare. These tools are consumer-based and can compromise patient privacy.

The authors of the paper also praise AI's potential when it comes to writing documents, taking dictation, and even helping with diagnosis. However, it should never be used as a diagnostic tool by itself: patients may have unusual symptoms or describe them in a way that is difficult for a bot or a translator to parse. Language is a powerful tool, but it's also complex and nuanced.

The paper states: "The medical profession will need to find a way to educate both physicians and trainees on the potential benefits but also the risks of these tools when summarizing information."
