Health NZ staff told to stop using ChatGPT to write clinical notes

Original link: https://www.rnz.co.nz/news/national/590645/health-nz-staff-told-to-stop-using-chatgpt-to-write-clinical-notes

Health New Zealand (HNZ) has warned staff against using free AI tools such as ChatGPT and Gemini to draft clinical notes, citing serious data security, privacy, and accountability risks. A recent memo to Mental Health and Addiction Services staff in Rotorua states that unauthorised use of AI has been detected and could result in disciplinary action. HNZ policy requires that all AI tools used for clinical purposes be registered and approved. While HNZ is rolling out an approved AI scribe tool ("Heidi"), the use of free alternatives is strictly prohibited, even with anonymised patient data. The Public Service Association argues that staff are turning to these tools because of the "enormous pressure" they face and understaffing; it criticises HNZ's threatening approach and calls for investment in training and approved resources rather than disciplinary threats. HNZ declined to comment on the number of incidents or on any resulting disciplinary action.

## A cautionary tale for AI in New Zealand healthcare

Healthcare staff in New Zealand have been advised to stop using public AI tools such as ChatGPT for clinical note-taking, owing to accuracy and privacy concerns. Even as the health system is *rolling out* AI transcription and note-taking solutions, concerns about data security are growing: patient data may be sent overseas (for example, to Azure in Australia) despite assurances that it would remain onshore. Medical professionals report problems with AI-generated notes, including "hallucinations" (fabricated information) and excessive irrelevant detail that increases workload rather than reducing it. There are also concerns about where liability would fall if an AI error harmed a patient. The core issue appears to be a lack of robust validation processes and a reliance on AI output without critical review. Some argue that AI has a place in medicine, for example in summarisation, provided it is implemented carefully with human oversight. Concerns extend to broader privacy implications, particularly given New Zealand's recent data breaches, and to the risk of "AI lock-in", where further AI tools are needed to interpret the data that AI generates. Ultimately, the debate highlights the tension between efficiency gains and patient safety amid the rapid expansion of AI in healthcare.

Original article
The image shows three hospital beds flanked by drawn curtains. There is a light green wash over the image.

Photo: RNZ

Health NZ (HNZ) says staff have been caught using free AI tools like ChatGPT and Gemini to write clinical notes, a move it says could result in formal disciplinary action.

A memo seen by RNZ was sent this week from a senior manager to all Mental Health and Addiction Services staff in the Rotorua Lakes district, reminding them not to use tools like ChatGPT, Claude or Gemini in their work.

"It has come to my attention that there has been instances where it appears that AI (artificial intelligence) drafting tools have been used to prepare clinical notes," it says.

"The use of free AI tools (e.g. ChatGPT, Claude, Gemini) for clinical purposes is strictly prohibited due to data security, privacy and accountability concerns. You are also not allowed to use AI tools to draft notes and then transcribing it to handwritten or typed notes, even if you anonymise the patient information."

Doing so could result in formal disciplinary action, it said.

According to the HNZ-wide AI policy, any AI tools must be registered with the Health NZ National Artificial Intelligence and Algorithm Expert Advisory Group (NAIAEAG) - this would include Heidi, an AI scribe tool being rolled out across EDs.

Sonny Taite, HNZ director of digital innovation and AI, said free AI tools presented risks to data security, privacy and accountability, and "any possible exemptions are assessed case by case".

"As with any new process in healthcare, we are working with our clinicians on new ways of working and this is an ongoing process."

HNZ did not answer questions about how many instances there had been of staff using unapproved AI software, or whether anyone had been disciplined.

Staff turning to AI tools under 'enormous pressure' - union

Fleur Fitzsimons, national secretary for the Public Service Association, which represents many health and addiction service workers, said clinical staff were turning to AI tools because of the "enormous pressure" they were under.

A memo which opened by threatening formal disciplinary action was the wrong approach, she said.

"It's a warning shot that will make staff afraid to ask questions or seek help."

HNZ should be investing in proper training and approved tools, she said.

"Let's not forget that HNZ has been cutting the very teams responsible for digital systems and IT support. If staff are improvising with free tools, HNZ needs to examine why that is the case, not simply threatening staff with a breach of the Code of Conduct."

