
News & Comment


  • Chen et al. demonstrate that large language models (LLMs) frequently prioritize agreement over accuracy when responding to illogical medical prompts, a behavior known as sycophancy. By reinforcing user assumptions, this tendency may amplify misinformation and bias in clinical contexts. The authors find that simple prompting strategies and LLM fine-tuning can markedly reduce sycophancy without impairing performance, highlighting a path toward safer, more trustworthy applications of LLMs in medicine.

    • Kyra L. Rosen
    • Margaret Sui
    • Joseph C. Kvedar
    EditorialOpen Access
  • Biased and poorly documented dermatology datasets pose risks to the development of safe and generalizable artificial intelligence (AI) tools. We created a Dataset Nutrition Label (DNL) for multiple dermatology datasets to support transparent and responsible data use. The DNL offers a structured, digestible summary of key attributes, including metadata, limitations, and risks, enabling data users to better assess suitability and proactively address potential sources of bias in datasets.

    • Yingjoy Li
    • Matthew Taylor
    • Veronica Rotemberg
    CommentOpen Access
  • The npj Digital Medicine Editorial Fellowship (https://www.nature.com/npjdigitalmed/editorial-fellowship) is a year-long program that provides trainees and early career researchers with direct exposure to peer review, editorial writing, and journal operations with npj Digital Medicine. Since 2021, the program has graduated 4 fellows, who remain active with the journal as reviewers, editorial board members, and guest editors. As the 2024–25 Editorial Fellow, I discuss the fellowship’s structure, outcomes, and learning experiences.

    • Ben Li
    CommentOpen Access
  • In “A Randomized Controlled Trial of Mobile Intervention Using Health Support Bubbles to Prevent Social Frailty”, Hayashi et al. investigated the effects of using a mobile health app with family or individually. Greater improvements in social behavior and frailty were noted in participants who used the app with family. In an era of remote healthcare and app-based health interventions, Hayashi et al.’s study reminds us of the importance of human connection.

    • Elizabeth J. Enichen
    • Kimia Heydari
    • Joseph C. Kvedar
    EditorialOpen Access
  • Despite its rapid advancement, digital health has given little consideration to issues of climate change or environmental degradation. As the digital health community begins to engage with this critical issue, scholars have started mapping progress in the field, typically focusing on digital health as it applies to climate and/or environmental mitigation or climate adaptation. In this Comment, we argue that climate and environment learning for mitigation and adaptation constitutes a critical yet overlooked dimension intersecting mitigation and adaptation strategies, warranting deliberate attention. This learning category is a systematic and transparent approach that applies structured and replicable methods to identify, appraise, and make use of evidence from data analytics across decision-making processes related to mitigation and adaptation, including implementation, and informs the exchange of new best practices in a post-climate era. The WHO’s Digital Health Classification framework offers a good option for ultimately formalising learning into practice. As a foundational step, however, learning needs to be conceptualised and developed into its own research agenda, organised around a shared language of metrics and evidence. We call on actors in the digital health field to develop this concrete strategy and initiate this process.

    • Maeghan Orton
    • Gabrielle Samuel
    • Peter Drury
    CommentOpen Access
  • Artificial intelligence (AI) is transforming traditional medicine, particularly in radiology. Its integration across patient care stages has made it increasingly ubiquitous. The European Union’s (EU) AI Act will additionally regulate AI-enabled solutions within the EU. However, without standardized guidelines, the Act’s flexibility poses practical challenges for providers and deployers, leading to inconsistencies in meeting requirements for high-risk systems like radiology AI, potentially impacting patients’ fundamental rights and safety.

    • Jaka Potočnik
    • Damjan Fujs
    CommentOpen Access
  • While large language models (LLMs) hold promise for transforming clinical healthcare, current comparisons and benchmark evaluations of LLMs in medicine often fail to capture real-world efficacy. Specifically, we highlight how key discrepancies arising from choices of data, tasks, and metrics can limit meaningful assessment of translational impact and lead to misleading conclusions. Therefore, we advocate for rigorous, context-aware evaluations and experimental transparency across both research and deployment.

    • Monica Agrawal
    • Irene Y. Chen
    • Shalmali Joshi
    CommentOpen Access
  • Artificial intelligence (AI) scribes have been rapidly adopted across health systems, driven by their promise to ease the documentation burden and reduce clinician burnout. While early evidence shows efficiency gains, this commentary cautions that adoption is outpacing validation and oversight. Without greater scrutiny, the rush to deploy AI scribes may compromise patient safety, clinical integrity, and provider autonomy.

    • Maxim Topaz
    • Laura Maria Peltonen
    • Zhihong Zhang
    CommentOpen Access
  • The rise of biomedical foundation models creates new hurdles in model testing and authorization, given their broad capabilities and susceptibility to complex distribution shifts. We suggest tailoring robustness tests according to task-dependent priorities and propose to integrate granular notions of robustness in a predefined specification to guide implementation. Our approach facilitates the standardization of robustness assessments in the model lifecycle and connects abstract AI regulatory frameworks with concrete testing procedures.

    • R. Patrick Xian
    • Noah R. Baker
    • Reza Abbasi-Asl
    CommentOpen Access
  • Integrating large language models (LLMs) into oncology holds promise for clinical decision support. Woollie is an LLM recently developed by Zhu et al., fine-tuned using radiology impression notes from Memorial Sloan Kettering Cancer Center and externally validated on UCSF oncology datasets. This methodology prioritizes data accuracy, preempts catastrophic forgetting, and demonstrates unparalleled rigor in predicting the progression of various cancer types. This work establishes a foundation for reliable, scalable, and equitable applications of LLMs in oncology.

    • Kimia Heydari
    • Elizabeth J. Enichen
    • Joseph C. Kvedar
    EditorialOpen Access
  • Foundation models are rapidly integrated into medicine, offering opportunities and ethical challenges. Unlike traditional medical technologies, they often enter real-world use without rigorous testing or oversight. We argue that their use constitutes a social experiment. This perspective highlights the unpredictable and partly uncontrollable nature of foundation models. We propose an ethical framework to guide responsible implementation, focusing on conditions for responsible experimentation rather than unattainable full predictability.

    • Robert Ranisch
    • Joschka Haltaufderheide
    CommentOpen Access
  • The use of synthetic data to augment real-world data in healthcare can help AI models perform more accurately and fairly across subgroups. By examining a parallel case study of NHS England’s care.data platform, this paper explores why care.data failed and offers recommendations for future synthetic data initiatives, centring on confidentiality, consent, and transparency as the key areas of focus needed to encourage successful adoption.

    • Sahar Abdulrahman
    • Markus Trengove
    CommentOpen Access
  • Return-to-work (RTW) after long-term absence due to ill health (or other factors) can be fraught with psychological, physical, and organisational challenges, which may require continuous management to ensure successful employee reintegration. While digital interventions have emerged to support reintegration, a recent systematic review revealed that few explicitly address RTW needs, despite growing interest in e-mental health. Early online interventions demonstrate promise in improving psychological outcomes, yet face limitations in scalability, personalisation, and integration into workplace systems. Smartphone-based interventions delivered via apps offer a scalable alternative, leveraging ubiquitous technology to deliver support beyond bespoke settings through self-monitoring, continuous learning, and communication tools. However, existing RTW-focused apps remain narrowly tailored to specific conditions, with limited adaptation to individual needs and insufficient evaluation of long-term effectiveness. Future developments must prioritise personalisation, rigorous evaluation in diverse populations, and integration within occupational health and real-world employer systems with organisational support. Addressing these gaps is essential to fully realise the potential of digital solutions in supporting sustainable work reintegration that is respectful and compassionate.

    • Conor Wall
    • Andrej Kohont
    • Alan Godfrey
    EditorialOpen Access
  • Large language models (LLMs), such as ChatGPT-o1, display subtle blind spots in complex reasoning tasks. We illustrate these pitfalls with lateral thinking puzzles and medical ethics scenarios. Our observations indicate that patterns in training data may contribute to cognitive biases, limiting the models’ ability to navigate nuanced ethical situations. Recognizing these tendencies is crucial for responsible AI deployment in clinical contexts.

    • Shelly Soffer
    • Vera Sorin
    • Eyal Klang
    CommentOpen Access
  • Artificial intelligence (AI) has primarily enhanced individual primary care visits, yet its potential for population health management remains untapped. Effective AI should integrate longitudinal patient data, automate proactive outreach, and mitigate disparities by addressing barriers such as transportation and language. Properly deployed, AI can significantly reduce administrative burden, facilitate early intervention, and improve equity in primary care, necessitating rigorous evaluation and adaptive design to realize sustained population-level benefits.

    • Sanjay Basu
    • Pablo Bermudez-Canete
    • Pranav Rajpurkar
    CommentOpen Access
  • Generative artificial intelligence can fulfil the criteria to be the ‘more knowledgeable other’ in a social constructivist framework. By scaffolding learning and providing a unique and augmented zone of proximal development for learners, it can simulate social interactions and contribute to the human-AI co-construction of knowledge. The presence of generative artificial intelligence in medical education prompts a re-imagining and re-interpretation of traditional roles within established pedagogy.

    • Michael Tran
    • Chinthaka Balasooriya
    • Joel Rhee
    CommentOpen Access
  • Wu et al.’s recent article, “Noninvasive early prediction of preeclampsia in pregnancy using retinal vascular features,” documents significant differences in retinal vascular features among women who develop preeclampsia and those with normotensive pregnancies. These findings provide evidence that retinal screening has the potential to be used as a low-cost, non-invasive screening strategy to support the earlier detection, prevention, and treatment of preeclampsia.

    • Kimia Heydari
    • Elizabeth J. Enichen
    • Joseph C. Kvedar
    EditorialOpen Access
  • Systemic integration and equitable adoption of Digital Health Technologies (DHTs) require timely, comprehensive, harmonised policies. This paper presents five complementary key enablers: defining DHTs in the scope of fit-for-purpose policy interventions, implementing AI-ready regulatory approaches, adopting dynamic assessment criteria, establishing dedicated reimbursement models, and promoting evidence generation, clinical guidelines, interoperability, and education. Cross-border and multistakeholder collaboration are also crucial to reducing fragmentation, addressing inequities, and driving scalable, systemic value.

    • Alberta M. C. Spreafico
    • Rosanna Tarricone
    • Ariel D. Stern
    CommentOpen Access
  • Predictive techniques in medical imaging offer transformative opportunities for early diagnosis and personalized care but raise complex ethical, legal, and economic challenges. This paper explores current advancements, regulatory implications, and risks such as overdiagnosis, alert fatigue, and information overload, emphasizing the urgent need for multidisciplinary frameworks to ensure responsible implementation and sustainable integration into healthcare systems.

    • Luca Saba
    • Ernesto d’Aloja
    CommentOpen Access
  • Regulatory agencies, such as the European Commission and the U.S. Food and Drug Administration, are now permitting electronic instructions for use (eIFUs) to be distributed alongside paper instructions for use (IFUs) for medical devices. However, challenges remain regarding the implementation of eIFUs to replace paper IFUs in the era of digital health. Our work examines regulatory, consumer, and environmental factors that influence the transition from paper-based IFUs to eIFUs for wearable diabetes devices.

    • Cindy N. Ho
    • Alessandra T. Ayers
    • David C. Klonoff
    CommentOpen Access
