Survey on GenAI in Healthcare Finds Practitioners Prefer Industry- and Task-Specific Language Models


2024-04-23 14:26



First Generative AI in Healthcare Survey Uncovers Trends, Challenges, and Best Practices in Generative AI among Healthcare and Life Sciences Practitioners

LEWES, Del., April 23, 2024 — John Snow Labs, the AI for healthcare company, today announced the findings of the inaugural Generative AI in Healthcare Survey. Conducted by Gradient Flow, the research explores the trends, tools, and behaviors around generative artificial intelligence (GenAI) use among healthcare and life sciences practitioners. Findings showed a significant increase in GenAI budgets across the board, with one-fifth of all technical leaders reporting budget growth of more than 300%, reflecting strong advocacy and investment.

The survey highlights several practitioner priorities unique to the healthcare industry. A strong preference for healthcare-specific models emerged as a key criterion when evaluating large language models (LLMs): requiring models to be tuned specifically for healthcare (mean response of 4.03) ranked higher in importance than reproducibility (3.91), legal and reputation risk (3.89), explainability and transparency (3.83), and cost (3.80). Accuracy is the top priority when evaluating LLMs, and lack of accuracy is considered the top risk in GenAI projects.

Another key finding is a strong preference for small, task-specific language models, which are optimized for specific use cases, unlike general-purpose LLMs. Survey results reflected this: 36% of respondents use healthcare-specific, task-specific language models, followed by open-source LLMs (24%) and open-source task-specific models (21%). Proprietary LLMs are less commonly used, whether through a SaaS API (18%) or on-premise (7%).

In terms of how models are tested and improved, the survey highlights one practice that addresses both the accuracy and compliance concerns of the healthcare industry: human-in-the-loop workflows.
This was by far the most common step taken to test and improve LLMs (55%), followed by supervised fine-tuning (32%) and interpretability tools and techniques (25%). A human-in-the-loop approach enables data scientists and domain experts to collaborate easily on training, testing, and fine-tuning models to their exact needs, improving them over time with feedback.

“Healthcare practitioners are already investing heavily in GenAI, but while budgets may not be a top concern, it’s clear that accuracy, privacy, and healthcare domain expertise are all critical,” said David Talby, CTO, John Snow Labs. “The survey results shine the light on the importance of healthcare-specific, task-specific language models, along with human-in-the-loop workflows as important techniques to enable the accurate, compliant, and responsible use of the technology.”

Finally, the survey shows how much work remains in applying responsible AI principles to healthcare GenAI projects. Lack of accuracy (3.78) and legal and reputational risk (3.62) were reported as the most concerning roadblocks. Worse, a majority of GenAI projects have not yet been tested against any of the LLM requirements cited. Among those that have, fairness (32%), explainability (27%), private data leakage (27%), hallucinations (26%), and bias (26%) were the most commonly tested, suggesting that no single aspect of responsible AI is being tested by more than a third of organizations.

An upcoming webinar, taking place at 2 p.m. ET on April 30 with Drs. Ben Lorica of Gradient Flow and David Talby of John Snow Labs, will provide additional details and analysis of the survey results and the current state of GenAI in healthcare.

About John Snow Labs

John Snow Labs, the AI for healthcare company, provides state-of-the-art software, models, and data to help healthcare and life science organizations put AI to good use.
John Snow Labs is the developer of Spark NLP, Healthcare NLP, the Healthcare GPT LLM, the Generative AI Lab No-Code Platform, and the Medical Chatbot; its award-winning medical AI software powers the world’s leading pharmaceuticals, academic medical centers, and health technology companies. Creator and host of The NLP Summit, the company is committed to further educating and advancing the global AI community.

Contact
Gina Devine
John Snow Labs
gina@johnsnowlabs.com

