A recent study highlights how data is changing not only the way we can assess the performance of law firms in the US, but also, more broadly, how computational science is expanding beyond its traditional scope into the legal field.
A recent study proposes using a single neural network to model and compute a wide range of solid-state materials, demonstrating exceptional transferability and substantially reduced computational costs — a breakthrough that could accelerate the design of next-generation materials in applications from efficient solar cells to room-temperature superconductors.
The Large Perturbation Model (LPM) is a computational deep learning framework that predicts gene expression responses to chemical and genetic perturbations across diverse contexts. By modeling perturbation, readout, and context jointly, LPM enables in silico hypothesis generation and drug repurposing.
A recent computational model, ‘BRyBI’, proposes that gamma, theta and delta neural oscillations can guide word recognition by providing temporal windows for the integration of bottom-up input with top-down information.
Integrating computational methods with brain-based data presents a path to precision psychiatry by capturing individual neurobiological variation, improving diagnosis, prognosis, and personalized care. This Viewpoint highlights advances in normative and foundation models, the importance of clinically grounded principles, and the role of robust measurement and interpretability in progressing mental health care.
Large language models (LLMs) offer promising ways to enhance psychotherapy through greater accessibility, personalization and engagement. This Perspective introduces a typology that categorizes the roles of LLMs in psychotherapy along two critical dimensions: autonomy and emotional engagement.
In this Perspective, the authors examine privacy risks in mental health AI, and explore solutions and evaluation frameworks to balance privacy–utility trade-offs. They suggest a pipeline for developing privacy-aware mental health AI systems.
Rapid identification of pathogenic viruses remains a critical challenge. A recent study advances this frontier by demonstrating a fully integrated memristor-based hardware system that accelerates genomic analysis by a factor of 51, while reducing energy consumption to just 0.2% of that required by conventional computational methods.
We propose a computationally efficient genome-wide association study (GWAS) method, WtCoxG, for time-to-event (TTE) traits in the presence of case ascertainment, a form of oversampling bias. WtCoxG addresses case ascertainment bias by applying a weighted Cox proportional hazards model, and outperforms existing approaches when incorporating information on external allele frequencies.
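As a rough illustration of how subject weights enter a Cox model (a minimal sketch of the general weighted partial likelihood, not the authors' WtCoxG implementation, and assuming a single covariate and no tied event times):

```python
import numpy as np

def weighted_cox_logpl(beta, x, time, event, w):
    """Weighted Cox partial log-likelihood for one covariate.

    Each subject i carries a weight w[i] (e.g. an inverse sampling
    probability correcting for case ascertainment). Each event adds
    w[i] * (x[i]*beta - log of the weighted sum over its risk set).
    Assumes no tied event times.
    """
    eta = x * beta
    order = np.argsort(-time)                  # sort by descending time
    eta, ev, ww = eta[order], event[order], w[order]
    # cumulative weighted sum exp(eta) over the risk set {j : time_j >= time_i}
    risk = np.cumsum(ww * np.exp(eta))
    at_event = ev == 1
    return np.sum(ww[at_event] * (eta[at_event] - np.log(risk[at_event])))
```

With all weights equal this reduces to the ordinary Cox partial log-likelihood; the fitted effect is the `beta` maximizing this quantity.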
This Perspective argues that generative AI aligns with generative linguistics, showing that neural language models (NLMs) are formal generative models. Furthermore, generative linguistics offers a framework for evaluating and improving NLMs.
A benchmark — MaCBench — is developed for evaluating the scientific knowledge of vision language models (VLMs). Evaluation of leading VLMs reveals that they excel at basic scientific tasks such as equipment identification, but struggle with spatial reasoning and multistep analysis — a limitation for autonomous scientific discovery.
A recent study demonstrates the potential of in-memory computing architectures for implementing large language models, improving computational efficiency in both time and energy while maintaining high accuracy.
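For intuition on why in-memory computing can preserve accuracy despite low-precision analog storage, a toy simulation (my own illustration, not the study's hardware or model) quantizes a weight matrix to 4-bit levels, as a memristor array might store conductances, and compares the matrix–vector product against the full-precision reference:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)) * 0.1    # stand-in for one layer's weights
x = rng.standard_normal(64)

# Simulate storing W as 4-bit signed conductance levels (-7 .. 7),
# uniform quantization scaled to the array's largest weight
scale = np.abs(W).max() / 7
W_q = np.round(W / scale) * scale

y_exact = W @ x                             # digital full-precision reference
y_imc = W_q @ x                             # "in-memory" low-precision product
rel_err = np.linalg.norm(y_imc - y_exact) / np.linalg.norm(y_exact)
```

The quantization error on individual weights is substantial, but it partially averages out across the 64 terms of each dot product, which is part of why analog matrix–vector engines can tolerate limited precision.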
Large language models remain largely unexplored in the design of cities. In this Perspective, the authors discuss the potential opportunities these models bring to urban planning.
An integrated platform, Digital Twin for Chemical Science (DTCS), is developed to connect first-principles theory with spectroscopic measurements through a bidirectional feedback loop. By predicting and refining chemical reaction mechanisms before, during and after experiments, DTCS enables the interpretation of spectra and supports real-time decision-making in chemical characterization.
Large language models are increasingly important in social science research. The authors provide guidance on how best to validate and use these models as rigorous tools to further scientific inference.
A recent study proposed ZeoBind, an AI-accelerated workflow enabling the discovery and experimental verification of hits within chemical spaces containing hundreds of millions of zeolites.
A recent study sought to replicate published experimental research using large language models, finding that the models replicate human behavior surprisingly well overall, but deviate in important ways that could lead social scientists astray.
A recent study provides intuition and guidelines for deciding whether to incorporate cheaper, lower-fidelity experiments into a closed-loop search for molecules and materials with desired properties.
An artificial neural network-based strategy is developed to learn committor-consistent transition pathways, providing insight into rare events in biomolecular systems.
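For intuition about what a learned committor should reproduce (this is a reference calculation, not the neural-network strategy of the study): in one dimension, the committor q(x) between a state A at x = a and a state B at x = b under overdamped dynamics has the closed form q(x) = ∫ₐˣ e^{βV(y)} dy / ∫ₐᵇ e^{βV(y)} dy, which can be evaluated numerically:

```python
import numpy as np

def committor_1d(V, a, b, beta=1.0, n=2001):
    """Exact committor q(x) for overdamped diffusion in potential V on [a, b].

    q(x) is the probability that a trajectory started at x reaches B (x = b)
    before A (x = a); boundary conditions q(a) = 0, q(b) = 1.
    """
    x = np.linspace(a, b, n)
    w = np.exp(beta * V(x))                     # integrand e^{beta V}
    # trapezoidal cumulative integral of w from a to each grid point
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    return x, cum / cum[-1]

# Symmetric double well with metastable states near x = -1 and x = +1
V = lambda x: (x**2 - 1.0)**2
x, q = committor_1d(V, -1.0, 1.0, beta=4.0)
```

In higher-dimensional biomolecular systems no such closed form exists, which is what motivates learning committor-consistent pathways with neural networks.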