-
ARIMA_PLUS: Large-scale, Accurate, Automatic and Interpretable In-Database Time Series Forecasting and Anomaly Detection in Google BigQuery
Authors:
Xi Cheng,
Weijie Shen,
Haoming Chen,
Chaoyi Shen,
Jean Ortega,
Jiashang Liu,
Steve Thomas,
Honglin Zheng,
Haoyun Wu,
Yuxiang Li,
Casey Lichtendahl,
Jenny Ortiz,
Gang Liu,
Haiyang Qi,
Omid Fatemieh,
Chris Fry,
Jing Jing Long
Abstract:
Time series forecasting and anomaly detection are common tasks for practitioners in industries such as retail, manufacturing, advertising and energy. Two unique challenges stand out: (1) efficiently and accurately forecasting time series or detecting anomalies in large volumes automatically; and (2) ensuring interpretability of results to effectively incorporate business insights. We present ARIMA_PLUS, a novel framework to overcome these two challenges through a unique combination of (a) accurate and interpretable time series models and (b) scalable and fully managed system infrastructure. The model has a sequential and modular structure to handle different components of the time series, including holiday effects, seasonality, trend, and anomalies, which enables high interpretability of the results. Novel enhancements are made to each module, and a unified framework is established to address both forecasting and anomaly detection tasks simultaneously. In terms of accuracy, a comprehensive benchmark on the 42 public datasets in the Monash forecasting repository shows superior performance over not only well-established statistical alternatives (such as ETS, ARIMA, TBATS, Prophet) but also newer neural network models (such as DeepAR, N-BEATS, PatchTST, TimeMixer). In terms of infrastructure, it is directly built into the query engine of BigQuery in Google Cloud. It uses a simple SQL interface and automates tedious technicalities such as data cleaning and model selection. It automatically scales with managed cloud computational and storage resources, making it possible to forecast 100 million time series in only 1.5 hours, at a throughput of more than 18,000 time series per second. In terms of interpretability, we present several case studies that demonstrate the time series insights it generates and the customizability it offers.
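Below is a minimal sketch of the SQL interface described above, issued from Python via the google-cloud-bigquery client. The project, dataset, table, and column names are placeholders, and the snippet assumes the publicly documented BigQuery ML ARIMA_PLUS options rather than anything specific to this paper.

```python
# Hedged sketch: train one ARIMA_PLUS model per series inside BigQuery, then forecast.
# Project/dataset/table/column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

client.query("""
    CREATE OR REPLACE MODEL `my-project.demo.sales_forecast`
    OPTIONS (
      model_type                = 'ARIMA_PLUS',
      time_series_timestamp_col = 'order_date',
      time_series_data_col      = 'daily_sales',
      time_series_id_col        = 'store_id'
    ) AS
    SELECT order_date, daily_sales, store_id
    FROM `my-project.demo.sales_history`
""").result()

# Forecast 30 steps ahead with 90% prediction intervals for every series id.
rows = client.query("""
    SELECT *
    FROM ML.FORECAST(MODEL `my-project.demo.sales_forecast`,
                     STRUCT(30 AS horizon, 0.9 AS confidence_level))
""").result()
for row in rows:
    print(row["store_id"], row["forecast_timestamp"], row["forecast_value"])
```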
Submitted 28 October, 2025;
originally announced October 2025.
-
Fair Representation Learning with Controllable High Confidence Guarantees via Adversarial Inference
Authors:
Yuhong Luo,
Austin Hoag,
Xintong Wang,
Philip S. Thomas,
Przemyslaw A. Grabowicz
Abstract:
Representation learning is increasingly applied to generate representations that generalize well across multiple downstream tasks. Ensuring fairness guarantees in representation learning is crucial to prevent unfairness toward specific demographic groups in downstream tasks. In this work, we formally introduce the task of learning representations that achieve high-confidence fairness. We aim to guarantee that demographic disparity in every downstream prediction remains bounded by a *user-defined* error threshold $ε$, with *controllable* high probability. To this end, we propose the ***F**air **R**epresentation learning with high-confidence **G**uarantees (FRG)* framework, which provides these high-confidence fairness guarantees by leveraging an optimized adversarial model. We empirically evaluate FRG on three real-world datasets, comparing its performance to six state-of-the-art fair representation learning methods. Our results demonstrate that FRG consistently bounds unfairness across a range of downstream models and tasks.
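The sketch below is a generic adversarial fair-representation training loop in PyTorch: an encoder learns a representation from which an adversary tries to recover the sensitive attribute, and the encoder is penalized when the adversary succeeds. It illustrates the adversarial inference idea only; it is not FRG's exact objective or its high-confidence guarantee, and all module sizes and the trade-off weight are assumptions.

```python
# Generic adversarial fair-representation sketch (not the exact FRG objective).
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))   # x -> z
task = nn.Linear(16, 1)                                                 # z -> y
adv = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))     # z -> s

opt_main = torch.optim.Adam(list(enc.parameters()) + list(task.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # assumed fairness/utility trade-off weight

def train_step(x, y, s):
    """x: (batch, 32); y, s: float tensors of shape (batch, 1)."""
    # 1) Train the adversary to predict the sensitive attribute s from z.
    z = enc(x).detach()
    opt_adv.zero_grad()
    bce(adv(z), s).backward()
    opt_adv.step()

    # 2) Train encoder + task head to predict y while fooling the adversary.
    z = enc(x)
    opt_main.zero_grad()
    loss = bce(task(z), y) - lam * bce(adv(z), s)
    loss.backward()
    opt_main.step()
    return loss.item()
```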
Submitted 23 October, 2025;
originally announced October 2025.
-
eDCF: Estimating Intrinsic Dimension using Local Connectivity
Authors:
Dhruv Gupta,
Aditya Nagarsekar,
Vraj Shah,
Sujith Thomas
Abstract:
Modern datasets often contain high-dimensional features exhibiting complex dependencies. To effectively analyze such data, dimensionality reduction methods rely on estimating the dataset's intrinsic dimension (id) as a measure of its underlying complexity. However, estimating id is challenging due to its dependence on scale: at very fine scales, noise inflates id estimates, while at coarser scales, estimates stabilize to lower, scale-invariant values. This paper introduces a novel, scalable, and parallelizable method called eDCF, which is based on Connectivity Factor (CF), a local connectivity-based metric, to robustly estimate intrinsic dimension across varying scales. Our method consistently matches leading estimators, achieving comparable values of mean absolute error (MAE) on synthetic benchmarks with noisy samples. Moreover, our approach also attains higher exact intrinsic dimension match rates, reaching up to 25.0% compared to 16.7% for MLE and 12.5% for TWO-NN, particularly excelling under medium to high noise levels and large datasets. Further, we showcase our method's ability to accurately detect fractal geometries in decision boundaries, confirming its utility for analyzing realistic, structured data.
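The Connectivity Factor itself is not defined in the abstract, so the sketch below implements the MLE (Levina-Bickel) baseline the method is compared against, to make the k-nearest-neighbour notion of a local intrinsic-dimension estimate concrete. The averaging convention and the toy data are assumptions.

```python
# Levina-Bickel MLE baseline for intrinsic dimension (not the eDCF estimator itself).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mle_intrinsic_dimension(X, k=10):
    """MLE estimate of intrinsic dimension for points X of shape (n, d)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)          # dist[:, 0] is the point itself (0.0)
    T = dist[:, 1:]                     # distances to the k nearest neighbours
    # Local estimate: inverse mean log-ratio of the k-th to the j-th neighbour distance.
    logs = np.log(T[:, -1][:, None] / T[:, :-1])
    m = (k - 1) / logs.sum(axis=1)
    return m.mean()                     # simple average over points (one convention)

# Example: a 2-D plane embedded in 10-D space should give an estimate near 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 10))
print(mle_intrinsic_dimension(X))
```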
Submitted 18 October, 2025;
originally announced October 2025.
-
Which Rewards Matter? Reward Selection for Reinforcement Learning under Limited Feedback
Authors:
Shreyas Chaudhari,
Renhao Zhang,
Philip S. Thomas,
Bruno Castro da Silva
Abstract:
The ability of reinforcement learning algorithms to learn effective policies is determined by the rewards available during training. However, for practical problems, obtaining large quantities of reward labels is often infeasible due to computational or financial constraints, particularly when relying on human feedback. When reinforcement learning must proceed with limited feedback -- only a fraction of samples get rewards labeled -- a fundamental question arises: which samples should be labeled to maximize policy performance? We formalize this problem of reward selection for reinforcement learning from limited feedback (RLLF), introducing a new problem formulation that facilitates the study of strategies for selecting impactful rewards. Two types of selection strategies are investigated: (i) heuristics that rely on reward-free information such as state visitation and partial value functions, and (ii) strategies pre-trained using auxiliary evaluative feedback. We find that critical subsets of rewards are those that (1) guide the agent along optimal trajectories, and (2) support recovery toward near-optimal behavior after deviations. Effective selection methods yield near-optimal policies with significantly fewer reward labels than full supervision, establishing reward selection as a powerful paradigm for scaling reinforcement learning in feedback-limited settings.
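As one concrete instance of the reward-free heuristics mentioned above, the sketch below spends the labeling budget on transitions whose originating states are visited most often in the offline data. The data format and the budget are assumptions, not the paper's protocol.

```python
# One possible reward-free selection heuristic: label the most-visited states' transitions.
from collections import Counter

def select_by_state_visitation(trajectories, budget):
    """trajectories: list of [(state, action, next_state), ...] with hashable states.
    Returns indices (into the flattened transition list) to send for reward labeling."""
    transitions = [t for traj in trajectories for t in traj]
    visits = Counter(s for (s, _, _) in transitions)
    ranked = sorted(range(len(transitions)),
                    key=lambda i: visits[transitions[i][0]], reverse=True)
    return ranked[:budget]

# All transitions outside the returned set remain unlabeled during RL training.
```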
Submitted 30 September, 2025;
originally announced October 2025.
-
Pretraining Large Language Models with NVFP4
Authors:
NVIDIA,
Felix Abecassis,
Anjulie Agrusa,
Dong Ahn,
Jonah Alben,
Stefania Alborghetti,
Michael Andersch,
Sivakumar Arayandi,
Alexis Bjorlin,
Aaron Blakeman,
Evan Briones,
Ian Buck,
Bryan Catanzaro,
Jinhang Choi,
Mike Chrzanowski,
Eric Chung,
Victor Cui,
Steve Dai,
Bita Darvish Rouhani,
Carlo del Mundo,
Deena Donia,
Burc Eryilmaz,
Henry Estela,
Abhinav Goel,
Oleg Goncharov, et al. (64 additional authors not shown)
Abstract:
Large Language Models (LLMs) today are powerful problem solvers across many domains, and they continue to get stronger as they scale in model size, training set size, and training set quality, as shown by extensive research and experimentation across the industry. Training a frontier model today requires on the order of tens to hundreds of yottaflops, which is a massive investment of time, compute, and energy. Improving pretraining efficiency is therefore essential to enable the next generation of even more capable LLMs. While 8-bit floating point (FP8) training is now widely adopted, transitioning to even narrower precision, such as 4-bit floating point (FP4), could unlock additional improvements in computational speed and resource utilization. However, quantization at this level poses challenges to training stability, convergence, and implementation, notably for large-scale models trained on long token horizons.
In this study, we introduce a novel approach for stable and accurate training of large language models (LLMs) using the NVFP4 format. Our method integrates Random Hadamard transforms (RHT) to bound block-level outliers, employs a two-dimensional quantization scheme for consistent representations across both the forward and backward passes, utilizes stochastic rounding for unbiased gradient estimation, and incorporates selective high-precision layers. We validate our approach by training a 12-billion-parameter model on 10 trillion tokens -- the longest publicly documented training run in 4-bit precision to date. Our results show that the model trained with our NVFP4-based pretraining technique achieves training loss and downstream task accuracies comparable to an FP8 baseline. These findings highlight that NVFP4, when combined with our training approach, represents a major step forward in narrow-precision LLM training algorithms.
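The numpy sketch below illustrates two of the ingredients named above in isolation: per-block scaling and stochastic (unbiased) rounding onto an FP4 E2M1-style value grid. It deliberately omits the Random Hadamard transform, the two-dimensional scheme, and NVFP4's exact scale encoding, so treat it as a toy illustration rather than the paper's recipe.

```python
# Toy per-block FP4 quantizer with stochastic rounding (simplified; not full NVFP4).
import numpy as np

FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # E2M1-style magnitudes

def quantize_block_fp4(x, rng):
    """Quantize a 1-D block: scale so the max magnitude maps near 6, then stochastically round."""
    scale = np.abs(x).max() / FP4_GRID[-1] + 1e-12
    v = np.abs(x) / scale
    # Indices of the grid points bracketing each value.
    hi = np.searchsorted(FP4_GRID, v, side="left").clip(1, len(FP4_GRID) - 1)
    lo = hi - 1
    g_lo, g_hi = FP4_GRID[lo], FP4_GRID[hi]
    # Round up with probability proportional to the distance from the lower grid point,
    # which makes the quantizer unbiased in expectation (E[q] = v).
    p_up = (v - g_lo) / (g_hi - g_lo)
    q = np.where(rng.random(v.shape) < p_up, g_hi, g_lo)
    return np.sign(x) * q * scale

rng = np.random.default_rng(0)
x = rng.normal(size=16)                   # one block of 16 values
print(x - quantize_block_fp4(x, rng))     # per-element quantization error
```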
Submitted 29 September, 2025;
originally announced September 2025.
-
Classification of kinetic-related injury in hospital triage data using NLP
Authors:
Midhun Shyam,
Jim Basilakis,
Kieran Luken,
Steven Thomas,
John Crozier,
Paul M. Middleton,
X. Rosalind Wang
Abstract:
Triage notes, created at the start of a patient's hospital visit, contain a wealth of information that can help medical staff and researchers understand Emergency Department patient epidemiology and the degree of time-dependent illness or injury. Unfortunately, applying modern Natural Language Processing and Machine Learning techniques to analyse triage data faces some challenges: Firstly, hospital data contains highly sensitive information that is subject to privacy regulation and thus needs to be analysed on site; Secondly, most hospitals and medical facilities lack the necessary hardware to fine-tune a Large Language Model (LLM), much less train one from scratch; Lastly, to identify the records of interest, expert inputs are needed to manually label the datasets, which can be time-consuming and costly. In this paper we present a pipeline that enables the classification of triage data using an LLM and limited compute resources. We first fine-tuned a pre-trained LLM with a classifier using a small (2k-sample) open-source dataset on a GPU, and then further fine-tuned the model with a hospital-specific dataset of 1,000 samples on a CPU. We demonstrated that by carefully curating the datasets and leveraging existing models and open-source data, we can successfully classify triage data with limited compute resources.
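A hedged sketch of such a two-stage fine-tuning pipeline, using the Hugging Face Trainer API, is shown below. The model name, the dataset objects (`open_ds`, `hospital_ds`, assumed to be pre-tokenized `datasets` objects with labels), and the hyperparameters are placeholders, not the paper's actual choices.

```python
# Two-stage classifier fine-tuning sketch; model name, datasets and hyperparameters
# are placeholders, not the paper's configuration.
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def finetune(ds, out_dir, epochs):
    args = TrainingArguments(output_dir=out_dir, num_train_epochs=epochs,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=ds).train()

# Stage 1: small (~2k) open-source injury dataset, run on a GPU if available.
finetune(open_ds, "stage1", epochs=3)

# Stage 2: ~1k hospital-specific triage notes; small enough that CPU-only
# fine-tuning of the already-adapted model is feasible on site.
finetune(hospital_ds, "stage2", epochs=2)
```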
Submitted 5 September, 2025;
originally announced September 2025.
-
The Future of Artificial Intelligence and the Mathematical and Physical Sciences (AI+MPS)
Authors:
Andrew Ferguson,
Marisa LaFleur,
Lars Ruthotto,
Jesse Thaler,
Yuan-Sen Ting,
Pratyush Tiwary,
Soledad Villar,
E. Paulo Alves,
Jeremy Avigad,
Simon Billinge,
Camille Bilodeau,
Keith Brown,
Emmanuel Candes,
Arghya Chattopadhyay,
Bingqing Cheng,
Jonathan Clausen,
Connor Coley,
Andrew Connolly,
Fred Daum,
Sijia Dong,
Chrisy Xiyu Du,
Cora Dvorkin,
Cristiano Fanelli,
Eric B. Ford,
Luis Manuel Frutos, et al. (75 additional authors not shown)
Abstract:
This community paper developed out of the NSF Workshop on the Future of Artificial Intelligence (AI) and the Mathematical and Physical Sciences (MPS), which was held in March 2025 with the goal of understanding how the MPS domains (Astronomy, Chemistry, Materials Research, Mathematical Sciences, and Physics) can best capitalize on, and contribute to, the future of AI. We present here a summary and snapshot of the MPS community's perspective, as of Spring/Summer 2025, in a rapidly developing field. The link between AI and MPS is becoming increasingly inextricable; now is a crucial moment to strengthen the link between AI and Science by pursuing a strategy that proactively and thoughtfully leverages the potential of AI for scientific discovery and optimizes opportunities to impact the development of AI by applying concepts from fundamental science. To achieve this, we propose activities and strategic priorities that: (1) enable AI+MPS research in both directions; (2) build up an interdisciplinary community of AI+MPS researchers; and (3) foster education and workforce development in AI for MPS researchers and students. We conclude with a summary of suggested priorities for funding agencies, educational institutions, and individual researchers to help position the MPS community to be a leader in, and take full advantage of, the transformative potential of AI+MPS.
Submitted 2 October, 2025; v1 submitted 2 September, 2025;
originally announced September 2025.
-
NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model
Authors:
NVIDIA,
Aarti Basant,
Abhijit Khairnar,
Abhijit Paithankar,
Abhinav Khattar,
Adithya Renduchintala,
Aditya Malte,
Akhiad Bercovich,
Akshay Hazare,
Alejandra Rico,
Aleksander Ficek,
Alex Kondratenko,
Alex Shaposhnikov,
Alexander Bukharin,
Ali Taghibakhshi,
Amelia Barton,
Ameya Sunil Mahabaleshwarkar,
Amy Shen,
Andrew Tao,
Ann Guan,
Anna Shors,
Anubhav Mandarwal,
Arham Mehta,
Arun Venkatesan, et al. (192 additional authors not shown)
Abstract:
We introduce Nemotron-Nano-9B-v2, a hybrid Mamba-Transformer language model designed to increase throughput for reasoning workloads while achieving state-of-the-art accuracy compared to similarly-sized models. Nemotron-Nano-9B-v2 builds on the Nemotron-H architecture, in which the majority of the self-attention layers in the common Transformer architecture are replaced with Mamba-2 layers, to achieve improved inference speed when generating the long thinking traces needed for reasoning. We create Nemotron-Nano-9B-v2 by first pre-training a 12-billion-parameter model (Nemotron-Nano-12B-v2-Base) on 20 trillion tokens using an FP8 training recipe. After aligning Nemotron-Nano-12B-v2-Base, we employ the Minitron strategy to compress and distill the model with the goal of enabling inference on up to 128k tokens on a single NVIDIA A10G GPU (22GiB of memory, bfloat16 precision). Compared to existing similarly-sized models (e.g., Qwen3-8B), we show that Nemotron-Nano-9B-v2 achieves on-par or better accuracy on reasoning benchmarks while achieving up to 6x higher inference throughput in reasoning settings like 8k input and 16k output tokens. We are releasing Nemotron-Nano-9B-v2, Nemotron-Nano-12B-v2-Base, and Nemotron-Nano-9B-v2-Base checkpoints along with the majority of our pre- and post-training datasets on Hugging Face.
Submitted 2 September, 2025; v1 submitted 20 August, 2025;
originally announced August 2025.
-
From Transthoracic to Transesophageal: Cross-Modality Generation using LoRA Diffusion
Authors:
Emmanuel Oladokun,
Yuxuan Ou,
Anna Novikova,
Daria Kulikova,
Sarina Thomas,
Jurica Šprem,
Vicente Grau
Abstract:
Deep diffusion models excel at realistic image synthesis but demand large training sets, an obstacle in data-scarce domains like transesophageal echocardiography (TEE). While synthetic augmentation has boosted performance in transthoracic echo (TTE), TEE remains critically underrepresented, limiting the reach of deep learning in this high-impact modality.
We address this gap by adapting a TTE-trained, mask-conditioned diffusion backbone to TEE with only a limited number of new cases and adapters as small as $10^5$ parameters. Our pipeline combines Low-Rank Adaptation with MaskR$^2$, a lightweight remapping layer that aligns novel mask formats with the pretrained model's conditioning channels. This design lets users adapt models to new datasets whose set of anatomical structures differs from the base model's original set.
Through a targeted adaptation strategy, we find that adapting only MLP layers suffices for high-fidelity TEE synthesis. Finally, mixing less than 200 real TEE frames with our synthetic echoes improves the dice score on a multiclass segmentation task, particularly boosting performance on underrepresented right-heart structures. Our results demonstrate that (1) semantically controlled TEE images can be generated with low overhead, (2) MaskR$^2$ effectively transforms unseen mask formats into compatible formats without damaging downstream task performance, and (3) our method generates images that are effective for improving performance on a downstream task of multiclass segmentation.
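For intuition about the adapter scale quoted above (roughly $10^5$ parameters), here is a minimal PyTorch sketch of a LoRA adapter wrapping a frozen linear layer. The diffusion backbone and the MaskR$^2$ remapping layer are not reproduced; layer width and rank are assumptions.

```python
# Minimal LoRA adapter: frozen base projection plus a trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():            # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)   # 2 * 8 * 768 = 12288 per adapted layer; a handful of layers is ~1e5
```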
Submitted 18 August, 2025;
originally announced August 2025.
-
WiseLVAM: A Novel Framework For Left Ventricle Automatic Measurements
Authors:
Durgesh Kumar Singh,
Qing Cao,
Sarina Thomas,
Ahcène Boubekki,
Robert Jenssen,
Michael Kampffmeyer
Abstract:
Clinical guidelines recommend performing left ventricular (LV) linear measurements in B-mode echocardiographic images at the basal level -- typically at the mitral valve leaflet tips -- and aligned perpendicular to the LV long axis along a virtual scanline (SL). However, most automated methods estimate landmarks directly from B-mode images for the measurement task, where even small shifts in predicted points along the LV walls can lead to significant measurement errors, reducing their clinical reliability. A recent semi-automatic method, EnLVAM, addresses this limitation by constraining landmark prediction to a clinician-defined SL and training on generated Anatomical Motion Mode (AMM) images to predict LV landmarks along the same. To enable full automation, a contour-aware SL placement approach is proposed in this work, in which the LV contour is estimated using a weakly supervised B-mode landmark detector. SL placement is then performed by inferring the LV long axis and the basal level -- mimicking clinical guidelines. Building on this foundation, we introduce \textit{WiseLVAM} -- a novel, fully automated yet manually adaptable framework for automatically placing the SL and then automatically performing the LV linear measurements in the AMM mode. \textit{WiseLVAM} utilizes the structure-awareness from B-mode images and the motion-awareness from AMM mode to enhance robustness and accuracy with the potential to provide a practical solution for routine clinical application.
Submitted 15 September, 2025; v1 submitted 16 August, 2025;
originally announced August 2025.
-
The State Of TTS: A Case Study with Human Fooling Rates
Authors:
Praveen Srinivasa Varadhan,
Sherry Thomas,
Sai Teja M. S.,
Suvrat Bhooshan,
Mitesh M. Khapra
Abstract:
While subjective evaluations in recent years indicate rapid progress in TTS, can current TTS systems truly pass a human deception test in a Turing-like evaluation? We introduce Human Fooling Rate (HFR), a metric that directly measures how often machine-generated speech is mistaken for human. Our large-scale evaluation of open-source and commercial TTS models reveals critical insights: (i) CMOS-based claims of human parity often fail under deception testing; (ii) TTS progress should be benchmarked on datasets where human speech achieves high HFRs, as evaluating against monotonous or less expressive reference samples sets a low bar; (iii) commercial models approach human deception in zero-shot settings, while open-source systems still struggle with natural conversational speech; and (iv) fine-tuning on high-quality data improves realism but does not fully bridge the gap. Our findings underscore the need for more realistic, human-centric evaluations alongside existing subjective tests.
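The metric itself reduces to a proportion: the fraction of synthetic clips that listeners judge to be human. A small sketch with a Wilson score interval is given below, assuming one binary "judged human?" verdict per clip; the counts are made up.

```python
# HFR as a proportion with a Wilson score confidence interval (illustrative counts).
import math

def human_fooling_rate(judged_human, n_clips, z=1.96):
    p = judged_human / n_clips
    denom = 1 + z**2 / n_clips
    centre = (p + z**2 / (2 * n_clips)) / denom
    half = z * math.sqrt(p * (1 - p) / n_clips + z**2 / (4 * n_clips**2)) / denom
    return p, (centre - half, centre + half)

print(human_fooling_rate(judged_human=187, n_clips=500))  # made-up counts
```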
Submitted 6 August, 2025;
originally announced August 2025.
-
Generating Novelty in Open-World Multi-Agent Strategic Board Games
Authors:
Mayank Kejriwal,
Shilpa Thomas
Abstract:
We describe GNOME (Generating Novelty in Open-world Multi-agent Environments), an experimental platform that is designed to test the effectiveness of multi-agent AI systems when faced with \emph{novelty}. GNOME separates the development of AI gameplaying agents from the simulator, allowing \emph{unanticipated} novelty (in essence, novelty that is not subject to model-selection bias). Using a Web GUI, GNOME was recently demonstrated at NeurIPS 2020 using the game of Monopoly to foster an open discussion on AI robustness and the nature of novelty in real-world environments. In this article, we further detail the key elements of the demonstration, and also provide an overview of the experimental design that is currently being used in the DARPA Science of Artificial Intelligence and Learning for Open-World Novelty (SAIL-ON) program to evaluate external teams developing novelty-adaptive gameplaying agents.
Submitted 4 July, 2025;
originally announced July 2025.
-
Reinforcement Learning from Human Feedback with High-Confidence Safety Constraints
Authors:
Yaswanth Chittepu,
Blossom Metevier,
Will Schwarzer,
Austin Hoag,
Scott Niekum,
Philip S. Thomas
Abstract:
Existing approaches to language model alignment often treat safety as a tradeoff against helpfulness, which can lead to unacceptable responses in sensitive domains. To ensure reliable performance in such settings, we propose High-Confidence Safe Reinforcement Learning from Human Feedback (HC-RLHF), a method that provides high-confidence safety guarantees while maximizing helpfulness. Similar to previous methods, HC-RLHF explicitly decouples human preferences into helpfulness and harmlessness (safety), which are learned by training a reward model and a cost model, respectively. It then employs a two-step process to find safe solutions. In the first step, it optimizes the reward function under an intentionally pessimistic version of the cost constraint. In the second step, the trained model undergoes a safety test to verify whether its performance stays within an upper-confidence bound of the actual cost constraint. We provide a theoretical analysis of HC-RLHF, including proof that it will not return an unsafe solution with a probability greater than a user-specified threshold. For our empirical analysis, we apply HC-RLHF to align three different language models (Qwen2-1.5B, Qwen2.5-3B, and LLaMa3.2-3B) with human preferences. Our results demonstrate that HC-RLHF produces safe models with high probability and can improve harmlessness and helpfulness compared to previous methods.
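Below is a sketch of the second-step safety test described above, under the assumption of a one-sided Student's-t upper confidence bound on expected cost; the abstract does not name the concentration bound actually used, so treat this as an illustration of the pattern rather than the paper's procedure.

```python
# Safety-test sketch: deploy only if a high-confidence upper bound on expected cost
# stays under the limit (t-based bound is an assumption).
import numpy as np
from scipy import stats

def passes_safety_test(costs, cost_limit, delta=0.05):
    """Return True if, with confidence 1 - delta, the expected cost is <= cost_limit."""
    costs = np.asarray(costs, dtype=float)
    n = len(costs)
    upper = costs.mean() + costs.std(ddof=1) / np.sqrt(n) * stats.t.ppf(1 - delta, df=n - 1)
    return upper <= cost_limit

# Evaluate the candidate model's responses on held-out prompts with the trained cost
# model, then run the test on those per-response costs.
rng = np.random.default_rng(0)
print(passes_safety_test(rng.normal(0.2, 0.1, size=200), cost_limit=0.3))
```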
Submitted 9 June, 2025;
originally announced June 2025.
-
Reality Check: A New Evaluation Ecosystem Is Necessary to Understand AI's Real World Effects
Authors:
Reva Schwartz,
Rumman Chowdhury,
Akash Kundu,
Heather Frase,
Marzieh Fadaee,
Tom David,
Gabriella Waters,
Afaf Taik,
Morgan Briggs,
Patrick Hall,
Shomik Jain,
Kyra Yee,
Spencer Thomas,
Sundeep Bhandari,
Paul Duncan,
Andrew Thompson,
Maya Carlyle,
Qinghua Lu,
Matthew Holmes,
Theodora Skeadas
Abstract:
Conventional AI evaluation approaches concentrated within the AI stack exhibit systemic limitations for exploring, navigating and resolving the human and societal factors that play out in real world deployment such as in education, finance, healthcare, and employment sectors. AI capability evaluations can capture detail about first-order effects, such as whether immediate system outputs are accurate, or contain toxic, biased or stereotypical content, but AI's second-order effects, i.e. any long-term outcomes and consequences that may result from AI use in the real world, have become a significant area of interest as the technology becomes embedded in our daily lives. These secondary effects can include shifts in user behavior, societal, cultural and economic ramifications, workforce transformations, and long-term downstream impacts that may result from a broad and growing set of risks. This position paper argues that measuring the indirect and secondary effects of AI will require expansion beyond static, single-turn approaches conducted in silico to include testing paradigms that can capture what actually materializes when people use AI technology in context. Specifically, we describe the need for data and methods that can facilitate contextual awareness and enable downstream interpretation and decision making about AI's secondary effects, and recommend requirements for a new ecosystem.
Submitted 30 May, 2025; v1 submitted 24 May, 2025;
originally announced May 2025.
-
RASMALAI: Resources for Adaptive Speech Modeling in Indian Languages with Accents and Intonations
Authors:
Ashwin Sankar,
Yoach Lacombe,
Sherry Thomas,
Praveen Srinivasa Varadhan,
Sanchit Gandhi,
Mitesh M Khapra
Abstract:
We introduce RASMALAI, a large-scale speech dataset with rich text descriptions, designed to advance controllable and expressive text-to-speech (TTS) synthesis for 23 Indian languages and English. It comprises 13,000 hours of speech and 24 million text-description annotations with fine-grained attributes like speaker identity, accent, emotion, style, and background conditions. Using RASMALAI, we develop IndicParlerTTS, the first open-source, text-description-guided TTS for Indian languages. Systematic evaluation demonstrates its ability to generate high-quality speech for named speakers, reliably follow text descriptions and accurately synthesize specified attributes. Additionally, it effectively transfers expressive characteristics both within and across languages. IndicParlerTTS consistently achieves strong performance across these evaluations, setting a new standard for controllable multilingual expressive speech synthesis in Indian languages.
Submitted 27 May, 2025; v1 submitted 24 May, 2025;
originally announced May 2025.
-
Qualia Optimization
Authors:
Philip S. Thomas
Abstract:
This report explores the speculative question: what if current or future AI systems have qualia, such as pain or pleasure? It does so by assuming that AI systems might someday possess qualia -- and that the quality of these subjective experiences should be considered alongside performance metrics. Concrete mathematical problem settings, inspired by reinforcement learning formulations and theories from philosophy of mind, are then proposed and initial approaches and properties are presented. These properties enable refinement of the problem setting, culminating with the proposal of methods that promote reinforcement.
Submitted 15 May, 2025;
originally announced May 2025.
-
Omni-R1: Do You Really Need Audio to Fine-Tune Your Audio LLM?
Authors:
Andrew Rouditchenko,
Saurabhchand Bhati,
Edson Araujo,
Samuel Thomas,
Hilde Kuehne,
Rogerio Feris,
James Glass
Abstract:
We propose Omni-R1 which fine-tunes a recent multi-modal LLM, Qwen2.5-Omni, on an audio question answering dataset with the reinforcement learning method GRPO. This leads to new State-of-the-Art performance on the recent MMAU and MMAR benchmarks. Omni-R1 achieves the highest accuracies on the sounds, music, speech, and overall average categories, both on the Test-mini and Test-full splits. To understand the performance improvement, we tested models both with and without audio and found that much of the performance improvement from GRPO could be attributed to better text-based reasoning. We also made a surprising discovery that fine-tuning without audio on a text-only dataset was effective at improving the audio-based performance.
Submitted 2 June, 2025; v1 submitted 14 May, 2025;
originally announced May 2025.
-
Streaming Krylov-Accelerated Stochastic Gradient Descent
Authors:
Stephen Thomas
Abstract:
We present SKA-SGD (Streaming Krylov-Accelerated Stochastic Gradient Descent), a novel optimization approach that accelerates convergence for ill-conditioned problems by projecting stochastic gradients onto a low-dimensional Krylov subspace. Directly inspired by recent advances in s-step Conjugate Gradient methods with streaming Gauss-Seidel Gram solvers \cite{dambra2025sstep}, our method extends these techniques to the stochastic optimization domain. Our approach combines three key innovations: (1) projection coefficients computed via a single streaming Gauss-Seidel iteration, which is mathematically equivalent to Modified Gram-Schmidt orthogonalization; (2) a Chebyshev polynomial basis for constructing the Krylov subspace, providing superior numerical stability; and (3) efficient implementation for AMD GPUs using HIP. We prove that our streaming approach achieves a backward error near machine precision with $O(s^2)$ complexity rather than $O(s^3)$, where $s$ is the Krylov subspace dimension. Experimental results demonstrate that SKA-SGD significantly outperforms standard SGD and Adam in convergence rate and final error, particularly for problems with condition numbers exceeding $10^3$. GPU performance analysis reveals a crossover point where communication-avoiding benefits outweigh computational overhead, typically occurring at moderate scale ($p \approx 64$ processors) for problem sizes $n \geq 10^6$.
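A numpy sketch of the core projection step is given below: build a Chebyshev-basis Krylov subspace, orthonormalize it with modified Gram-Schmidt (stated above to be equivalent to the single streaming Gauss-Seidel sweep), and project noisy gradients onto that subspace before the SGD update. The operator defining the subspace, its spectral scaling, the subspace refresh schedule, and the toy problem are assumptions, not the paper's algorithm verbatim.

```python
# Sketch: project stochastic gradients onto a Chebyshev-basis Krylov subspace.
import numpy as np

def chebyshev_krylov_basis(A, v, s):
    """Columns span K_s(A, v), built with the Chebyshev three-term recurrence.
    Assumes A is scaled so its spectrum lies in [-1, 1]."""
    V = np.empty((len(v), s))
    V[:, 0] = v
    if s > 1:
        V[:, 1] = A @ v
    for k in range(2, s):
        V[:, k] = 2.0 * (A @ V[:, k - 1]) - V[:, k - 2]
    return V

def orthonormalize(V):
    """Modified Gram-Schmidt orthonormalization of the basis columns."""
    Q = np.zeros_like(V)
    for j in range(V.shape[1]):
        v = V[:, j].copy()
        for i in range(j):
            v -= (Q[:, i] @ v) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Toy ill-conditioned quadratic 0.5*w'Aw - b'w: build the subspace once from a
# reference gradient, then project noisy gradients onto it at every SGD step.
rng = np.random.default_rng(0)
n, s, lr = 200, 8, 0.5
A = np.diag(np.logspace(0, -4, n))                   # condition number 1e4
b = rng.normal(size=n)
w = np.zeros(n)
Q = orthonormalize(chebyshev_krylov_basis(A / np.linalg.norm(A, 2), A @ w - b, s))
for _ in range(500):
    g = A @ w - b + 0.01 * rng.normal(size=n)        # noisy gradient
    w -= lr * (Q @ (Q.T @ g))                        # project, then step
print(np.linalg.norm(A @ w - b))                     # residual after training
```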
Submitted 11 May, 2025;
originally announced May 2025.
-
CAV-MAE Sync: Improving Contrastive Audio-Visual Mask Autoencoders via Fine-Grained Alignment
Authors:
Edson Araujo,
Andrew Rouditchenko,
Yuan Gong,
Saurabhchand Bhati,
Samuel Thomas,
Brian Kingsbury,
Leonid Karlinsky,
Rogerio Feris,
James R. Glass,
Hilde Kuehne
Abstract:
Recent advances in audio-visual learning have shown promising results in learning representations across modalities. However, most approaches rely on global audio representations that fail to capture fine-grained temporal correspondences with visual frames. Additionally, existing methods often struggle with conflicting optimization objectives when trying to jointly learn reconstruction and cross-modal alignment. In this work, we propose CAV-MAE Sync as a simple yet effective extension of the original CAV-MAE framework for self-supervised audio-visual learning. We address three key challenges: First, we tackle the granularity mismatch between modalities by treating audio as a temporal sequence aligned with video frames, rather than using global representations. Second, we resolve conflicting optimization goals by separating contrastive and reconstruction objectives through dedicated global tokens. Third, we improve spatial localization by introducing learnable register tokens that reduce semantic load on patch tokens. We evaluate the proposed approach on AudioSet, VGG Sound, and the ADE20K Sound dataset on zero-shot retrieval, classification and localization tasks demonstrating state-of-the-art performance and outperforming more complex architectures.
Submitted 21 May, 2025; v1 submitted 2 May, 2025;
originally announced May 2025.
-
Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models
Authors:
NVIDIA,
Aaron Blakeman,
Aarti Basant,
Abhinav Khattar,
Adithya Renduchintala,
Akhiad Bercovich,
Aleksander Ficek,
Alexis Bjorlin,
Ali Taghibakhshi,
Amala Sanjay Deshmukh,
Ameya Sunil Mahabaleshwarkar,
Andrew Tao,
Anna Shors,
Ashwath Aithal,
Ashwin Poojary,
Ayush Dattagupta,
Balaram Buddharaju,
Bobby Chen,
Boris Ginsburg,
Boxin Wang,
Brandon Norick,
Brian Butterfield,
Bryan Catanzaro,
Carlo del Mundo, et al. (176 additional authors not shown)
Abstract:
As inference-time scaling becomes critical for enhanced reasoning capabilities, it is increasingly becoming important to build models that are efficient to infer. We introduce Nemotron-H, a family of 8B and 56B/47B hybrid Mamba-Transformer models designed to reduce inference cost for a given accuracy level. To achieve this goal, we replace the majority of self-attention layers in the common Transformer model architecture with Mamba layers that perform constant computation and require constant memory per generated token. We show that Nemotron-H models offer either better or on-par accuracy compared to other similarly-sized state-of-the-art open-sourced Transformer models (e.g., Qwen-2.5-7B/72B and Llama-3.1-8B/70B), while being up to 3$\times$ faster at inference. To further increase inference speed and reduce the memory required at inference time, we created Nemotron-H-47B-Base from the 56B model using a new compression via pruning and distillation technique called MiniPuzzle. Nemotron-H-47B-Base achieves similar accuracy to the 56B model, but is 20% faster to infer. In addition, we introduce an FP8-based training recipe and show that it can achieve on par results with BF16-based training. This recipe is used to train the 56B model. We are releasing Nemotron-H base model checkpoints with support in Hugging Face and NeMo.
Submitted 5 September, 2025; v1 submitted 4 April, 2025;
originally announced April 2025.
-
Quantifying the uncertainty of model-based synthetic image quality metrics
Authors:
Ciaran Bench,
Spencer A. Thomas
Abstract:
The quality of synthetically generated images (e.g. those produced by diffusion models) is often evaluated using information about image contents encoded by pretrained auxiliary models. For example, the Fréchet Inception Distance (FID) uses embeddings from an InceptionV3 model pretrained to classify ImageNet. The effectiveness of this feature embedding model has considerable impact on the trustworthiness of the calculated metric (affecting its suitability in several domains, including medical imaging). Here, uncertainty quantification (UQ) is used to provide a heuristic measure of the trustworthiness of the feature embedding model and an FID-like metric called the Fréchet Autoencoder Distance (FAED). We apply Monte Carlo dropout to a feature embedding model (convolutional autoencoder) to model the uncertainty in its embeddings. The distribution of embeddings for each input is then used to compute a distribution of FAED values. We express uncertainty as the predictive variance of the embeddings as well as the standard deviation of the computed FAED values. We find that their magnitude correlates with the extent to which the inputs are out-of-distribution to the model's training data, providing some validation of its ability to assess the trustworthiness of the FAED.
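A hedged sketch of this computation: run several Monte Carlo dropout passes through the embedding model, compute a Fréchet distance between Gaussian fits of the reference and synthetic embeddings for each pass, and report the spread of those distances as the uncertainty signal. `embed_mc_dropout` is a placeholder for the dropout-enabled autoencoder encoder.

```python
# FAED-with-uncertainty sketch; `embed_mc_dropout` is a placeholder embedding function
# that keeps dropout active at inference time and returns arrays of shape (n, d).
import numpy as np
from scipy import linalg

def frechet_distance(x, y):
    """Frechet distance between Gaussian fits of two embedding sets."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x, cov_y = np.cov(x, rowvar=False), np.cov(y, rowvar=False)
    covmean = linalg.sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):
        covmean = covmean.real              # drop tiny imaginary parts from sqrtm
    diff = mu_x - mu_y
    return diff @ diff + np.trace(cov_x + cov_y - 2.0 * covmean)

def faed_with_uncertainty(real_images, synth_images, embed_mc_dropout, T=20):
    faed_values = [frechet_distance(embed_mc_dropout(real_images),
                                    embed_mc_dropout(synth_images))
                   for _ in range(T)]
    return np.mean(faed_values), np.std(faed_values)   # metric and its uncertainty
```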
Submitted 4 April, 2025;
originally announced April 2025.
-
Are Deep Speech Denoising Models Robust to Adversarial Noise?
Authors:
Will Schwarzer,
Philip S. Thomas,
Andrea Fanelli,
Xiaoyu Liu
Abstract:
Deep noise suppression (DNS) models enjoy widespread use throughout a variety of high-stakes speech applications. However, in this paper, we show that four recent DNS models can each be reduced to outputting unintelligible gibberish through the addition of imperceptible adversarial noise. Furthermore, our results show the near-term plausibility of targeted attacks, which could induce models to output arbitrary utterances, and over-the-air attacks. While the success of these attacks varies by model and setting, and attacks appear to be strongest when model-specific (i.e., white-box and non-transferable), our results highlight a pressing need for practical countermeasures in DNS systems.
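The sketch below shows a generic white-box, PGD-style attack consistent with the setting above: an L-infinity-bounded perturbation optimized to maximize the change in the denoiser's output. The denoiser is a placeholder for any differentiable DNS model, and the step sizes are assumptions; this is not the paper's attack implementation.

```python
# Generic PGD-style perturbation against a differentiable denoiser (illustrative only).
import torch

def adversarial_perturbation(denoiser, noisy_audio, eps=1e-3, steps=40, alpha=1e-4):
    """noisy_audio: (batch, samples) waveform; returns an L-inf bounded perturbation."""
    with torch.no_grad():
        reference = denoiser(noisy_audio)              # output on the unperturbed input
    delta = torch.zeros_like(noisy_audio, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(denoiser(noisy_audio + delta), reference)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()               # ascend: maximize output distortion
            delta.clamp_(-eps, eps)                    # keep the perturbation imperceptible
    return delta.detach()
```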
Submitted 14 March, 2025;
originally announced March 2025.
-
Supervised Reward Inference
Authors:
Will Schwarzer,
Jordan Schneider,
Philip S. Thomas,
Scott Niekum
Abstract:
Existing approaches to reward inference from behavior typically assume that humans provide demonstrations according to specific models of behavior. However, humans often indicate their goals through a wide range of behaviors, from actions that are suboptimal due to poor planning or execution to behaviors which are intended to communicate goals rather than achieve them. We propose that supervised learning offers a unified framework to infer reward functions from any class of behavior, and show that such an approach is asymptotically Bayes-optimal under mild assumptions. Experiments on simulated robotic manipulation tasks show that our method can efficiently infer rewards from a wide variety of arbitrarily suboptimal demonstrations.
Submitted 25 February, 2025;
originally announced February 2025.
-
Aspect-Oriented Summarization for Psychiatric Short-Term Readmission Prediction
Authors:
WonJin Yoon,
Boyu Ren,
Spencer Thomas,
Chanhwi Kim,
Guergana Savova,
Mei-Hua Hall,
Timothy Miller
Abstract:
Recent progress in large language models (LLMs) has enabled the automated processing of lengthy documents even without supervised training on a task-specific dataset. Yet, their zero-shot performance in complex tasks as opposed to straightforward information extraction tasks remains suboptimal. One feasible approach for tasks with lengthy, complex input is to first summarize the document and then apply supervised fine-tuning to the summary. However, the summarization process inevitably results in some loss of information. In this study we present a method for processing the summaries of long documents aimed to capture different important aspects of the original document. We hypothesize that LLM summaries generated with different aspect-oriented prompts contain different information signals, and we propose methods to measure these differences. We introduce approaches to effectively integrate signals from these different summaries for supervised training of transformer models. We validate our hypotheses on a high-impact task -- 30-day readmission prediction from a psychiatric discharge -- using real-world data from four hospitals, and show that our proposed method increases the prediction performance for the complex task of predicting patient outcome.
Submitted 10 November, 2025; v1 submitted 14 February, 2025;
originally announced February 2025.
-
Style transfer as data augmentation: evaluating unpaired image-to-image translation models in mammography
Authors:
Emir Ahmed,
Spencer A. Thomas,
Ciaran Bench
Abstract:
Several studies indicate that deep learning models can learn to detect breast cancer from mammograms (X-ray images of the breasts). However, challenges with overfitting and poor generalisability prevent their routine use in the clinic. Models trained on data from one patient population may not perform well on another due to differences in their data domains, emerging due to variations in scanning technology or patient characteristics. Data augmentation techniques can be used to improve generalisability by expanding the diversity of feature representations in the training data by altering existing examples. Image-to-image translation models are one approach capable of imposing the characteristic feature representations (i.e. style) of images from one dataset onto another. However, evaluating model performance is non-trivial, particularly in the absence of ground truths (a common reality in medical imaging). Here, we describe some key aspects that should be considered when evaluating style transfer algorithms, highlighting the advantages and disadvantages of popular metrics, and important factors to be mindful of when implementing them in practice. We consider two types of generative models: a cycle-consistent generative adversarial network (CycleGAN) and a diffusion-based SynDiff model. We learn unpaired image-to-image translation across three mammography datasets. We highlight that undesirable aspects of model performance may determine the suitability of some metrics, and also provide some analysis indicating the extent to which various metrics assess unique aspects of model performance. We emphasise the need to use several metrics for a comprehensive assessment of model performance.
Submitted 4 February, 2025;
originally announced February 2025.
-
mWhisper-Flamingo for Multilingual Audio-Visual Noise-Robust Speech Recognition
Authors:
Andrew Rouditchenko,
Samuel Thomas,
Hilde Kuehne,
Rogerio Feris,
James Glass
Abstract:
Audio-Visual Speech Recognition (AVSR) combines lip-based video with audio and can improve performance in noise, but most methods are trained only on English data. One limitation is the lack of large-scale multilingual video data, which makes it hard to train models from scratch. In this work, we propose mWhisper-Flamingo for multilingual AVSR which combines the strengths of a pre-trained audio model (Whisper) and video model (AV-HuBERT). To enable better multi-modal integration and improve the noisy multilingual performance, we introduce decoder modality dropout where the model is trained both on paired audio-visual inputs and separate audio/visual inputs. mWhisper-Flamingo achieves state-of-the-art WER on MuAViC, an AVSR dataset of 9 languages. Audio-visual mWhisper-Flamingo consistently outperforms audio-only Whisper on all languages in noisy conditions.
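A minimal sketch of decoder modality dropout as described above: each training batch is conditioned on both modalities, audio only, or video only, with the dropped stream replaced by zeros. The sampling probabilities and the fusion details are assumptions.

```python
# Modality dropout sketch: randomly drop one encoder stream during training.
import random
import torch

def modality_dropout(audio_feats, video_feats, p_both=0.5, p_audio=0.25):
    """audio_feats, video_feats: (batch, time, dim) encoder outputs."""
    r = random.random()
    if r < p_both:
        return audio_feats, video_feats                      # paired audio-visual input
    if r < p_both + p_audio:
        return audio_feats, torch.zeros_like(video_feats)    # audio-only input
    return torch.zeros_like(audio_feats), video_feats        # visual-only input

# The decoder is then conditioned on whatever the function returns, so a single model
# learns to cope with a missing modality at test time.
```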
Submitted 7 May, 2025; v1 submitted 3 February, 2025;
originally announced February 2025.
-
Trustworthy image-to-image translation: evaluating uncertainty calibration in unpaired training scenarios
Authors:
Ciaran Bench,
Emir Ahmed,
Spencer A. Thomas
Abstract:
Mammographic screening is an effective method for detecting breast cancer, facilitating early diagnosis. However, the current need to manually inspect images places a heavy burden on healthcare systems, spurring a desire for automated diagnostic protocols. Techniques based on deep neural networks have been shown effective in some studies, but their tendency to overfit leaves considerable risk for poor generalisation and misdiagnosis, preventing their widespread adoption in clinical settings. Data augmentation schemes based on unpaired neural style transfer models have been proposed that improve generalisability by diversifying the representations of training image features in the absence of paired training data (images of the same tissue in either image style). But these models are similarly prone to various pathologies, and evaluating their performance is challenging without ground truths/large datasets (as is often the case in medical imaging). Here, we consider two frameworks/architectures: a GAN-based cycleGAN, and the more recently developed diffusion-based SynDiff. We evaluate their performance when trained on image patches parsed from three open access mammography datasets and one non-medical image dataset. We consider the use of uncertainty quantification to assess model trustworthiness, and propose a scheme to evaluate calibration quality in unpaired training scenarios. This ultimately helps facilitate the trustworthy use of image-to-image translation models in domains where ground truths are not typically available.
Submitted 29 January, 2025;
originally announced January 2025.
-
Torque Responsive Metamaterials Enable High Payload Soft Robot Arms
Authors:
Ian Good,
Srivatsan Balaji,
David Oh,
Sawyer Thomas,
Jeffrey I. Lipton
Abstract:
Soft robots have struggled to support large forces and moments while also supporting their own weight against gravity. This limits their ability to reach certain configurations necessary for tasks such as inspection and pushing objects up. We have overcome this limitation by creating an electrically driven metamaterial soft arm using handed shearing auxetics (HSA) and bendable extendable torque resistant (BETR) shafts. These use the large force and torque capacity of HSAs and the nestable torque transmission of BETRs to create a strong soft arm. We found that the HSA arm was able to push 2.3 kg vertically and lift more than 600 g when positioned horizontally, supporting 0.33 Nm of torque at the base. The arm is able to move between waypoints while carrying the large payload and demonstrates consistent movement with path variance below 5 mm. The HSA arm's ability to perform active grasping with HSA grippers was also demonstrated, requiring 20 N of pull force to dislodge the object. Finally, we test the arm in a pipe inspection task. The arm is able to locate all the defects while sliding against the inner surface of the pipe, demonstrating its compliance.
Submitted 16 January, 2025;
originally announced January 2025.
-
A Non-autoregressive Model for Joint STT and TTS
Authors:
Vishal Sunder,
Brian Kingsbury,
George Saon,
Samuel Thomas,
Slava Shechtman,
Hagai Aronowitz,
Eric Fosler-Lussier,
Luis Lastras
Abstract:
In this paper, we take a step towards jointly modeling automatic speech recognition (STT) and speech synthesis (TTS) in a fully non-autoregressive way. We develop a novel multimodal framework capable of handling the speech and text modalities as input either individually or together. The proposed model can also be trained with unpaired speech or text data owing to its multimodal nature. We further propose an iterative refinement strategy to improve the STT and TTS performance of our model such that the partial hypothesis at the output can be fed back to the input of our model, thus iteratively improving both STT and TTS predictions. We show that our joint model can effectively perform both STT and TTS tasks, outperforming the STT-specific baseline in all tasks and performing competitively with the TTS-specific baseline across a wide range of evaluation metrics.
Submitted 20 January, 2025; v1 submitted 15 January, 2025;
originally announced January 2025.
-
EchoNarrator: Generating natural text explanations for ejection fraction predictions
Authors:
Sarina Thomas,
Qing Cao,
Anna Novikova,
Daria Kulikova,
Guy Ben-Yosef
Abstract:
Ejection fraction (EF) of the left ventricle (LV) is considered one of the most important measurements for diagnosing acute heart failure and can be estimated during cardiac ultrasound acquisition. While recent deep learning models estimate EF values successfully, they often lack an explanation for the prediction. However, providing clear and intuitive explanations for clinical measurement predictions would increase the trust of cardiologists in these models. In this paper, we explore predicting EF measurements with Natural Language Explanation (NLE). We propose a model that, in a single forward pass, combines estimation of the LV contour over multiple frames with a set of modules and routines for computing various motion and shape attributes associated with ejection fraction. It then feeds the attributes into a large language model to generate text that helps to explain the network's outcome in a human-like manner. We provide an experimental evaluation of our explanatory output, as well as EF prediction, and show that our model can provide EF estimates comparable to the state of the art together with meaningful and accurate natural language explanations of the prediction. The project page can be found at https://github.com/guybenyosef/EchoNarrator.
Submitted 31 October, 2024;
originally announced October 2024.
-
Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation
Authors:
Shreyas Chaudhari,
Ameet Deshpande,
Bruno Castro da Silva,
Philip S. Thomas
Abstract:
Evaluating policies using off-policy data is crucial for applying reinforcement learning to real-world problems such as healthcare and autonomous driving. Previous methods for off-policy evaluation (OPE) generally suffer from high variance or irreducible bias, leading to unacceptably high prediction errors. In this work, we introduce STAR, a framework for OPE that encompasses a broad range of estimators -- which include existing OPE methods as special cases -- that achieve lower mean squared prediction errors. STAR leverages state abstraction to distill complex, potentially continuous problems into compact, discrete models which we call abstract reward processes (ARPs). Predictions from ARPs estimated from off-policy data are provably consistent (asymptotically correct). Rather than proposing a specific estimator, we present a new framework for OPE and empirically demonstrate that estimators within STAR outperform existing methods. The best STAR estimator outperforms baselines in all twelve cases studied, and even the median STAR estimator surpasses the baselines in seven out of the twelve cases.
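To make the idea of an abstract reward process concrete, here is a minimal Python sketch (not the authors' STAR estimators, which additionally handle importance weighting and estimator selection): logged transitions are compressed through an abstraction function into a small Markov reward process whose value is obtained by solving the Bellman linear system. The function names and the toy binning abstraction are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: compress logged transitions into a small Markov reward
# process over abstract states and evaluate it by solving the Bellman
# linear system. This is not the STAR estimator family itself.

def build_arp(transitions, phi, n_abstract, gamma=0.99):
    """transitions: list of (s, r, s_next); phi: maps a raw state to an
    abstract state index in [0, n_abstract)."""
    counts = np.zeros((n_abstract, n_abstract))
    reward_sums = np.zeros(n_abstract)
    visits = np.zeros(n_abstract)
    for s, r, s_next in transitions:
        z, z_next = phi(s), phi(s_next)
        counts[z, z_next] += 1.0
        reward_sums[z] += r
        visits[z] += 1.0
    # Maximum-likelihood transition matrix and mean reward per abstract state.
    P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    r_bar = reward_sums / np.maximum(visits, 1.0)
    # Value of the abstract reward process: v = (I - gamma * P)^{-1} r.
    v = np.linalg.solve(np.eye(n_abstract) - gamma * P, r_bar)
    return P, r_bar, v

# Toy usage: raw states in [0, 1) abstracted into 4 equal-width bins.
rng = np.random.default_rng(0)
logged = [(rng.random(), rng.random(), rng.random()) for _ in range(5000)]
P, r_bar, v = build_arp(logged, phi=lambda s: min(int(s * 4), 3), n_abstract=4)
print("abstract-state values:", np.round(v, 3))
```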
Submitted 2 October, 2024;
originally announced October 2024.
-
IndicVoices-R: Unlocking a Massive Multilingual Multi-speaker Speech Corpus for Scaling Indian TTS
Authors:
Ashwin Sankar,
Srija Anand,
Praveen Srinivasa Varadhan,
Sherry Thomas,
Mehak Singal,
Shridhar Kumar,
Deovrat Mehendale,
Aditi Krishana,
Giri Raju,
Mitesh Khapra
Abstract:
Recent advancements in text-to-speech (TTS) synthesis show that large-scale models trained with extensive web data produce highly natural-sounding output. However, such data is scarce for Indian languages due to the lack of high-quality, manually subtitled data on platforms like LibriVox or YouTube. To address this gap, we enhance existing large-scale ASR datasets containing natural conversations collected in low-quality environments to generate high-quality TTS training data. Our pipeline leverages the cross-lingual generalization of denoising and speech enhancement models trained on English and applied to Indian languages. This results in IndicVoices-R (IV-R), the largest multilingual Indian TTS dataset derived from an ASR dataset, with 1,704 hours of high-quality speech from 10,496 speakers across 22 Indian languages. IV-R matches the quality of gold-standard TTS datasets like LJSpeech, LibriTTS, and IndicTTS. We also introduce the IV-R Benchmark, the first to assess zero-shot, few-shot, and many-shot speaker generalization capabilities of TTS models on Indian voices, ensuring diversity in age, gender, and style. We demonstrate that fine-tuning an English pre-trained model on a combined dataset of high-quality IndicTTS and our IV-R dataset results in better zero-shot speaker generalization compared to fine-tuning on the IndicTTS dataset alone. Further, our evaluation reveals limited zero-shot generalization for Indian voices in TTS models trained on prior datasets, which we improve by fine-tuning the model on our data containing a diverse set of speakers across language families. We open-source all data and code, releasing the first TTS model for all 22 official Indian languages.
Submitted 7 October, 2024; v1 submitted 9 September, 2024;
originally announced September 2024.
-
Unbalanced Fingerprint Classification for Hybrid Fingerprint Orientation Maps
Authors:
Ravi Prakash,
Sinnu Susan Thomas
Abstract:
This paper introduces a novel fingerprint classification technique based on a multi-layered fuzzy logic classifier. We target the cause of missed detections by identifying fingerprints at an early stage as dry, standard, or wet. Scanned images are classified based on clarity correlated with the proposed feature points. We also propose a novel adaptive algorithm based on eigenvector space for generating new samples to overcome the multiclass imbalance. The proposed methods improve the performance of ensemble learners, and the new approach also performs better than neural-network-based classification methods. These early-stage improvements yield a suitable dataset for fingerprint detection models. Leveraging the novel classifier, the best set of `standard'-labelled fingerprints is used to generate a unique hybrid fingerprint orientation map (HFOM). We introduce a novel min-rotate max-flow optimization method inspired by the min-cut max-flow algorithm. The unique properties of HFOM generation introduce a new use case for biometric data protection by using HFOM as a virtual proxy for fingerprints.
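As a rough illustration of eigenvector-space oversampling (the paper's adaptive algorithm is not specified here, so this generic PCA-based sketch only conveys the flavor of the idea): synthetic minority-class samples are drawn around the class mean along its principal components, scaled by the variance each component explains. All names and sizes below are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Generic eigenvector-space oversampling sketch (not the paper's adaptive
# algorithm): perturb the minority-class mean along its principal
# components, with per-component scale set by the explained variance.

def oversample_in_eigenspace(X_minority, n_new, n_components=3, seed=0):
    rng = np.random.default_rng(seed)
    pca = PCA(n_components=min(n_components, X_minority.shape[1]))
    pca.fit(X_minority)
    coeffs = rng.normal(
        scale=np.sqrt(pca.explained_variance_), size=(n_new, pca.n_components_)
    )
    # Map the sampled coefficients back to the original feature space.
    return pca.mean_ + coeffs @ pca.components_

X_wet = np.random.default_rng(1).normal(size=(40, 8))  # toy minority class
X_new = oversample_in_eigenspace(X_wet, n_new=100)
print(X_new.shape)  # (100, 8)
```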
Submitted 1 September, 2024;
originally announced September 2024.
-
Design and architecture of the IBM Quantum Engine Compiler
Authors:
Michael B. Healy,
Reza Jokar,
Soolu Thomas,
Vincent R. Pascuzzi,
Kit Barton,
Thomas A. Alexander,
Roy Elkabetz,
Brian C. Donovan,
Hiroshi Horii,
Marius Hillenbrand
Abstract:
In this work, we describe the design and architecture of the open-source Quantum Engine Compiler (qe-compiler) currently used in production for IBM Quantum systems. The qe-compiler is built using LLVM's Multi-Level Intermediate Representation (MLIR) framework and includes definitions for several dialects to represent parameterized quantum computation at multiple levels of abstraction. The compiler also provides Python bindings and a diagnostic system. An open-source LALR lexer and parser built using Bison and Flex generates an Abstract Syntax Tree that is translated to a high-level MLIR dialect. An extensible hierarchical target system for modeling the heterogeneous nature of control systems at compilation time is included. Target-based and generic compilation passes are added using a pipeline interface to translate the input down to low-level intermediate representations (including LLVM IR) and can take advantage of LLVM backends and tooling to generate machine executable binaries. The qe-compiler is built to be extensible, maintainable, performant, and scalable to support the future of quantum computing.
Submitted 12 August, 2024;
originally announced August 2024.
-
Lower Bounds for Approximate (& Exact) k-Disjoint-Shortest-Paths
Authors:
Rajesh Chitnis,
Samuel Thomas,
Anthony Wirth
Abstract:
Given a graph $G=(V,E)$ and a set $T=\{ (s_i, t_i) : 1\leq i\leq k \}\subseteq V\times V$ of $k$ pairs, the $k$-vertex-disjoint-paths (resp. $k$-edge-disjoint-paths) problem asks to determine whether there exist $k$ pairwise vertex-disjoint (resp. edge-disjoint) paths $P_1, P_2, ..., P_k$ in $G$ such that, for each $1\leq i\leq k$, $P_i$ connects $s_i$ to $t_i$. Both the edge-disjoint and vertex-disjoint versions in undirected graphs are famously known to be FPT (parameterized by $k$) due to the Graph Minor Theory of Robertson and Seymour. Eilam-Tzoreff [DAM `98] introduced a variant, known as the $k$-disjoint-shortest-paths problem, where each individual path is further required to be a shortest path connecting its pair. They showed that the $k$-disjoint-shortest-paths problem is NP-complete on both directed and undirected graphs; this holds even if the graphs are planar and have unit edge lengths. We focus on four versions of the problem, corresponding to considering edge/vertex disjointness, and to considering directed/undirected graphs. Building on the reduction of Chitnis [SIDMA `23] for $k$-edge-disjoint-paths on planar DAGs, we obtain the following inapproximability lower bound for each of the four versions of $k$-disjoint-shortest-paths on $n$-vertex graphs: under Gap-ETH, there exists a constant $δ>0$ such that for any constant $0<ε\leq \frac{1}{2}$ and any computable function $f$, there is no $(\frac{1}{2}+ε)$-approximation in $f(k)\cdot n^{δ\cdot k}$ time. We further strengthen our results as follows. Directed: the inapproximability lower bound for edge-disjoint (resp. vertex-disjoint) paths holds even if the input graph is a planar (resp. 1-planar) DAG with max in-degree and max out-degree at most $2$. Undirected: the inapproximability lower bound for edge-disjoint (resp. vertex-disjoint) paths holds even if the input graph is planar (resp. 1-planar) and has max degree $4$.
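For readers new to the problem, the following sketch illustrates the definition rather than the hardness results: with unit edge lengths, a candidate solution is feasible exactly when its paths are pairwise vertex-disjoint and each one attains the BFS distance of its pair. The helper names are ours.

```python
from collections import deque

# Illustration of the k-disjoint-shortest-paths definition (unit edge
# lengths, vertex-disjoint version), not of the hardness reduction.

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_disjoint_shortest_solution(adj, pairs, paths):
    used = set()
    for (s, t), path in zip(pairs, paths):
        if path[0] != s or path[-1] != t:
            return False
        # Every consecutive pair on the path must be an edge of G.
        if any(v not in adj[u] for u, v in zip(path, path[1:])):
            return False
        # Shortest-path requirement under unit lengths.
        if len(path) - 1 != bfs_dist(adj, s).get(t, float("inf")):
            return False
        # Vertex-disjointness across all k paths.
        if used & set(path):
            return False
        used |= set(path)
    return True

# Toy 4-cycle: the two demands are routed on opposite sides of the cycle.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_disjoint_shortest_solution(adj, [(0, 1), (3, 2)], [[0, 1], [3, 2]]))  # True
```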
Submitted 7 August, 2024;
originally announced August 2024.
-
Influence of Personality Traits on Plagiarism Through Collusion in Programming Assignments
Authors:
Parthasarathy PD,
Ishaan Kapoor,
Swaroop Joshi,
Sujith Thomas
Abstract:
Educating students about academic integrity expectations has been suggested as one of the ways to reduce malpractice in take-home programming assignments. We test this hypothesis using data collected from an artificial intelligence course with 105 participants (N=105) at a university in India. The AI course had two programming assignments. Plagiarism through collusion was quantified using the Measure of Software Similarity (MOSS) tool. Students were educated about what constitutes academic dishonesty and were required to take an honor pledge before the start of the second take-home programming assignment. The two programming assignments were novel and did not have solutions available on the internet. We expected the mean percentage of similar lines of code to be significantly lower in the second programming assignment. However, our results show no significant difference in the mean percentage of similar lines of code across the two programming assignments. We also study how the Big Five personality traits affect the propensity for plagiarism in the two take-home assignments. Our results across both assignments show that the extraversion trait exhibits a positive association with plagiarism tendencies, while the conscientiousness trait exhibits a negative association. Our results suggest that the policy of educating students about academic integrity will have a limited impact as long as students perceive an opportunity for plagiarism to be present. We explain our results using the fraud triangle model.
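The similarity measure itself is easy to picture. MOSS uses document fingerprinting and is not reproduced here; the sketch below is only a rough stand-in that estimates a "percentage of similar lines" between two submissions with Python's difflib.

```python
import difflib

# Rough stand-in for a "percentage of similar lines" metric (the study
# uses the MOSS service, whose fingerprinting algorithm is not shown):
# compare two submissions line by line with difflib.

def percent_similar_lines(code_a: str, code_b: str) -> float:
    lines_a = [ln.strip() for ln in code_a.splitlines() if ln.strip()]
    lines_b = [ln.strip() for ln in code_b.splitlines() if ln.strip()]
    matcher = difflib.SequenceMatcher(a=lines_a, b=lines_b)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 100.0 * matched / max(len(lines_a), 1)

a = "import math\n\ndef area(r):\n    return math.pi * r * r\n"
b = "import math\n\ndef area(radius):\n    return math.pi * radius * radius\n"
print(f"{percent_similar_lines(a, b):.1f}% of lines match exactly")
```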
Submitted 29 June, 2024;
originally announced July 2024.
-
Position: Benchmarking is Limited in Reinforcement Learning Research
Authors:
Scott M. Jordan,
Adam White,
Bruno Castro da Silva,
Martha White,
Philip S. Thomas
Abstract:
Novel reinforcement learning algorithms, or improvements on existing ones, are commonly justified by evaluating their performance on benchmark environments and are compared to an ever-changing set of standard algorithms. However, despite numerous calls for improvements, experimental practices continue to produce misleading or unsupported claims. One reason for the ongoing substandard practices is that conducting rigorous benchmarking experiments requires substantial computational time. This work investigates the sources of increased computation costs in rigorous experiment designs. We show that conducting rigorous performance benchmarks will likely have computational costs that are often prohibitive. As a result, we argue for using an additional experimentation paradigm to overcome the limitations of benchmarking.
Submitted 23 June, 2024;
originally announced June 2024.
-
Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation
Authors:
Andrew Rouditchenko,
Yuan Gong,
Samuel Thomas,
Leonid Karlinsky,
Hilde Kuehne,
Rogerio Feris,
James Glass
Abstract:
Audio-Visual Speech Recognition (AVSR) uses lip-based video to improve performance in noise. Since videos are harder to obtain than audio, the video training data of AVSR models is usually limited to a few thousand hours. In contrast, speech models such as Whisper are trained with hundreds of thousands of hours of data, and thus learn a better speech-to-text decoder. The huge training data difference motivates us to adapt Whisper to handle video inputs. Inspired by Flamingo which injects visual features into language models, we propose Whisper-Flamingo which integrates visual features into the Whisper speech recognition and translation model with gated cross attention. Our models achieve state-of-the-art ASR WER (0.68%) and AVSR WER (0.76%) on LRS3, and state-of-the-art ASR WER (1.3%) and AVSR WER (1.4%) on LRS2. Audio-visual Whisper-Flamingo outperforms audio-only Whisper on English speech recognition and En-X translation for 6 languages in noisy conditions. Moreover, Whisper-Flamingo is versatile and conducts all of these tasks using one set of parameters, while prior methods are trained separately on each language.
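The gated cross-attention pattern borrowed from Flamingo can be sketched in a few lines of PyTorch. This is not the released Whisper-Flamingo code; it only illustrates the mechanism: decoder states attend to visual features, and a tanh gate initialized at zero means the block initially leaves the pretrained speech model's behavior unchanged. Module and parameter names are illustrative.

```python
import torch
import torch.nn as nn

# Flamingo-style gated cross-attention sketch: the residual branch is
# multiplied by tanh(gate), and the gate starts at zero, so the block is
# initially an identity on top of the pretrained decoder.

class GatedCrossAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.gate = nn.Parameter(torch.zeros(1))  # tanh(0) = 0 at init

    def forward(self, x: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # x: (batch, T_text, d_model); visual: (batch, T_video, d_model)
        attended, _ = self.attn(self.norm(x), visual, visual)
        return x + torch.tanh(self.gate) * attended

layer = GatedCrossAttention(d_model=256)
x = torch.randn(2, 10, 256)        # decoder hidden states
visual = torch.randn(2, 25, 256)   # lip-video features
print(layer(x, visual).shape)      # torch.Size([2, 10, 256])
```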
Submitted 19 November, 2024; v1 submitted 14 June, 2024;
originally announced June 2024.
-
ICU-Sepsis: A Benchmark MDP Built from Real Medical Data
Authors:
Kartik Choudhary,
Dhawal Gupta,
Philip S. Thomas
Abstract:
We present ICU-Sepsis, an environment that can be used in benchmarks for evaluating reinforcement learning (RL) algorithms. Sepsis management is a complex task that has been an important topic in applied RL research in recent years. Therefore, MDPs that model sepsis management can serve as part of a benchmark to evaluate RL algorithms on a challenging real-world problem. However, creating usable MDPs that simulate sepsis care in the ICU remains a challenge due to the complexities involved in acquiring and processing patient data. ICU-Sepsis is a lightweight environment that models personalized care of sepsis patients in the ICU. The environment is a tabular MDP that is widely compatible and is challenging even for state-of-the-art RL algorithms, making it a valuable tool for benchmarking their performance. However, we emphasize that while ICU-Sepsis provides a standardized environment for evaluating RL algorithms, it should not be used to draw conclusions that guide medical practice.
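Because the environment is a tabular MDP, it is fully described by transition and reward arrays, which is what makes such a benchmark easy to distribute and evaluate against. The sketch below uses toy dynamics (not the actual ICU-Sepsis tables) to show the interface a tabular MDP exposes and how a baseline like value iteration consumes it.

```python
import numpy as np

# Generic tabular-MDP sketch with toy dynamics (not the ICU-Sepsis tables):
# the environment is just P[s, a, s'] and R[s, a].

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 8, 3, 0.97

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, :]
R = rng.normal(size=(n_states, n_actions))                        # R[s, a]

def value_iteration(P, R, gamma, tol=1e-8):
    v = np.zeros(P.shape[0])
    while True:
        q = R + gamma * P @ v          # q[s, a]
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=1)
        v = v_new

v_star, pi_star = value_iteration(P, R, gamma)
print("greedy policy per state:", pi_star)
```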
Submitted 14 October, 2024; v1 submitted 9 June, 2024;
originally announced June 2024.
-
Improving Fairness in Credit Lending Models using Subgroup Threshold Optimization
Authors:
Cecilia Ying,
Stephen Thomas
Abstract:
In an effort to improve the accuracy of credit lending decisions, many financial institutions are now using predictions from machine learning models. While such predictions enjoy many advantages, recent research has shown that the predictions have the potential to be biased and unfair towards certain subgroups of the population. To combat this, several techniques have been introduced to help remove the bias and improve the overall fairness of the predictions. We introduce a new fairness technique, called \textit{Subgroup Threshold Optimizer} (\textit{STO}), that does not require any alterations to the input training data nor any changes to the underlying machine learning algorithm, and thus can be used with any existing machine learning pipeline. STO works by optimizing the classification thresholds for individual subgroups in order to minimize the overall discrimination score between them. Our experiments on a real-world credit lending dataset show that STO can reduce gender discrimination by over 90\%.
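A minimal sketch of the per-subgroup thresholding idea follows; the exact discrimination score and search procedure used by STO may differ, and a practical deployment would also constrain accuracy rather than minimizing the gap alone. All function and variable names are illustrative.

```python
import itertools
import numpy as np

# Sketch of per-subgroup threshold search: pick one decision threshold per
# group so positive-prediction rates are as close as possible. A real
# system would also trade this off against accuracy or profit.

def fit_subgroup_thresholds(scores, groups, grid=np.linspace(0.1, 0.9, 17)):
    groups = np.asarray(groups)
    uniq = sorted(set(groups))
    best, best_gap = None, np.inf
    for combo in itertools.product(grid, repeat=len(uniq)):
        thresholds = dict(zip(uniq, combo))
        rates = [np.mean(scores[groups == g] >= thresholds[g]) for g in uniq]
        gap = max(rates) - min(rates)  # demographic-parity style gap
        if gap < best_gap:
            best, best_gap = thresholds, gap
    return best, best_gap

rng = np.random.default_rng(0)
scores = np.clip(rng.normal(0.5, 0.2, size=400), 0, 1)   # model scores
groups = rng.choice(["A", "B"], size=400)                # sensitive attribute
thresholds, gap = fit_subgroup_thresholds(scores, groups)
print(thresholds, f"gap={gap:.3f}")
```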
Submitted 15 March, 2024;
originally announced March 2024.
-
Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap
Authors:
Saurabh Srivastava,
Annarose M B,
Anto P V,
Shashank Menon,
Ajay Sukumar,
Adwaith Samod T,
Alan Philipose,
Stevin Prince,
Sooraj Thomas
Abstract:
We propose a framework for robust evaluation of reasoning capabilities of language models, using functional variants of benchmarks. Models that solve a reasoning test should exhibit no difference in performance over the static version of a problem compared to a snapshot of the functional variant. We have rewritten the relevant fragment of the MATH benchmark into its functional variant MATH(), with functionalization of other benchmarks to follow. When evaluating current state-of-the-art models over snapshots of MATH(), we find a reasoning gap -- the percentage difference between the static and functional accuracies. We find reasoning gaps from 58.35% to 80.31% among state-of-the-art closed and open-weights models that perform well on static benchmarks, with the caveat that the gaps are likely to be smaller with more sophisticated prompting strategies. Here we show that models which anecdotally have good reasoning performance on real-world tasks have quantifiably lower gaps, motivating the open problem of building "gap 0" models. Code for evaluation and new evaluation datasets (three MATH() snapshots) are publicly available at https://github.com/consequentai/fneval/.
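The mechanics of a functional benchmark item and of the reasoning gap can be illustrated with a toy template (invented here; MATH() rewrites real MATH problems): the wording is fixed, the numbers are re-sampled for each snapshot, and the gold answer is recomputed, so a model that merely memorized the static instance scores well only on that instance.

```python
import random

# Toy functional benchmark item and reasoning gap, for illustration only.

def functional_item(rng):
    a, b = rng.randint(2, 30), rng.randint(2, 30)
    question = f"A box holds {a} rows of {b} marbles. How many marbles are there?"
    return question, a * b

def accuracy(model, items):
    return sum(model(q) == ans for q, ans in items) / len(items)

STATIC_Q, STATIC_ANS = functional_item(random.Random(0))   # the static snapshot

def memorizing_model(question):
    # Stand-in for a model that memorized the static snapshot's answer.
    return STATIC_ANS

static_acc = accuracy(memorizing_model, [(STATIC_Q, STATIC_ANS)])
functional_acc = accuracy(
    memorizing_model,
    [functional_item(random.Random(seed)) for seed in range(1, 101)],
)
print(f"reasoning gap of the memorizing model: {static_acc - functional_acc:.0%}")
```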
Submitted 29 February, 2024;
originally announced February 2024.
-
Graph Convolutional Neural Networks for Automated Echocardiography View Recognition: A Holistic Approach
Authors:
Sarina Thomas,
Cristiana Tiago,
Børge Solli Andreassen,
Svein Arne Aase,
Jurica Šprem,
Erik Steen,
Anne Solberg,
Guy Ben-Yosef
Abstract:
To facilitate diagnosis on cardiac ultrasound (US), clinical practice has established several standard views of the heart, which serve as reference points for diagnostic measurements and define viewports from which images are acquired. Automatic view recognition involves grouping those images into classes of standard views. Although deep learning techniques have been successful in achieving this, they still struggle with fully verifying the suitability of an image for specific measurements due to factors like the correct location, pose, and potential occlusions of cardiac structures. Our approach goes beyond view classification and incorporates a 3D mesh reconstruction of the heart that enables several more downstream tasks, like segmentation and pose estimation. In this work, we explore learning 3D heart meshes via graph convolutions, using similar techniques to learn 3D meshes in natural images, such as human pose estimation. As the availability of fully annotated 3D images is limited, we generate synthetic US images from 3D meshes by training an adversarial denoising diffusion model. Experiments were conducted on synthetic and clinical cases for view recognition and structure detection. The approach yielded good performance on synthetic images and, despite being exclusively trained on synthetic data, it already showed potential when applied to clinical images. With this proof-of-concept, we aim to demonstrate the benefits of graphs to improve cardiac view recognition that can ultimately lead to better efficiency in cardiac diagnosis.
Submitted 1 March, 2024; v1 submitted 29 February, 2024;
originally announced February 2024.
-
Cluster Metric Sensitivity to Irrelevant Features
Authors:
Miles McCrory,
Spencer A. Thomas
Abstract:
Clustering algorithms are used extensively in data analysis for data exploration and discovery. Technological advancements lead to continual growth of data in terms of volume, dimensionality and complexity. This provides great opportunities in data analytics as the data can be interrogated for many different purposes. This, however, leads to challenges such as the identification of relevant features for a given task. In supervised tasks, one can utilise a number of methods to optimise the input features for the task objective (e.g. classification accuracy). In unsupervised problems, such tools are not readily available, in part due to an inability to quantify feature relevance in unlabeled tasks. In this paper, we investigate the sensitivity of clustering performance to noisy, uncorrelated variables iteratively added to baseline datasets with well-defined clusters. We show how different types of irrelevant variables can impact the outcome of a clustering result from $k$-means in different ways. We observe a resilience to very high proportions of irrelevant features for the adjusted Rand index (ARI) and normalised mutual information (NMI) when the irrelevant features are Gaussian distributed. For uniformly distributed irrelevant features, we notice that the resilience of ARI and NMI depends on the dimensionality of the data and exhibits tipping points between high scores and near zero. Our results show that the Silhouette Coefficient and the Davies-Bouldin score are the most sensitive to irrelevant added features, exhibiting large changes in score for comparably low proportions of irrelevant features regardless of the underlying distribution or data scaling. As such, the Silhouette Coefficient and the Davies-Bouldin score are good candidates for optimising feature selection in unsupervised clustering tasks.
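The experimental setup described above can be reproduced in miniature with scikit-learn; the sample sizes, cluster counts and noise dimensions below are toy choices, not the paper's datasets.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (adjusted_rand_score, davies_bouldin_score,
                             normalized_mutual_info_score, silhouette_score)

# Append Gaussian noise features to well-separated blobs and track how
# each cluster metric responds as the number of irrelevant dimensions grows.

X, y = make_blobs(n_samples=600, centers=4, n_features=4, random_state=0)
rng = np.random.default_rng(0)

for n_noise in (0, 4, 16, 64):
    noise = rng.normal(size=(X.shape[0], n_noise))
    X_aug = np.hstack([X, noise]) if n_noise else X
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_aug)
    print(
        f"noise dims={n_noise:3d} "
        f"ARI={adjusted_rand_score(y, labels):.2f} "
        f"NMI={normalized_mutual_info_score(y, labels):.2f} "
        f"Silhouette={silhouette_score(X_aug, labels):.2f} "
        f"DB={davies_bouldin_score(X_aug, labels):.2f}"
    )
```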
Submitted 19 February, 2024;
originally announced February 2024.
-
HGOT: Hierarchical Graph of Thoughts for Retrieval-Augmented In-Context Learning in Factuality Evaluation
Authors:
Yihao Fang,
Stephen W. Thomas,
Xiaodan Zhu
Abstract:
With the widespread adoption of large language models (LLMs) in numerous applications, the challenge of factuality and the propensity for hallucinations have emerged as significant concerns. To address this issue, particularly in retrieval-augmented in-context learning, we introduce the hierarchical graph of thoughts (HGOT), a structured, multi-layered graph approach designed to enhance the retrieval of pertinent passages during in-context learning. The framework utilizes the emergent planning capabilities of LLMs, employing the divide-and-conquer strategy to break down complex queries into manageable sub-queries. It refines self-consistency majority voting for answer selection, incorporating the recently proposed citation recall and precision metrics to assess the quality of thoughts and linking an answer's credibility intrinsically to the quality of the thought that produced it. This methodology introduces a weighted system in majority voting, prioritizing answers based on the citation quality of their thoughts. Additionally, we propose a scoring mechanism for evaluating retrieved passages, considering factors such as citation frequency and quality, self-consistency confidence, and the retrieval module's ranking. Experiments indicate that HGOT excels as a versatile approach, outperforming competing models on FEVER by up to $7\%$ and matching leading models such as Retrieve-then-Read on Open-SQuAD and DSP on HotPotQA, demonstrating its efficacy in enhancing LLMs' factuality.
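A minimal sketch of citation-quality-weighted self-consistency voting is shown below; HGOT combines more signals (e.g. self-consistency confidence and retrieval ranking) than this illustration, and the F1-style weighting is an assumption.

```python
from collections import defaultdict

# Sketch of weighted self-consistency voting: each sampled answer votes
# with a weight derived from its thought's citation recall and precision,
# so better-supported thoughts count for more.

def weighted_majority_vote(candidates):
    """candidates: list of (answer, citation_recall, citation_precision)."""
    votes = defaultdict(float)
    for answer, recall, precision in candidates:
        f1 = 2 * recall * precision / (recall + precision + 1e-9)
        votes[answer] += f1
    return max(votes, key=votes.get), dict(votes)

candidates = [
    ("Paris", 0.9, 0.8),    # well-cited thought
    ("Lyon", 0.2, 0.5),     # poorly supported thought
    ("Paris", 0.7, 0.6),
]
print(weighted_majority_vote(candidates))
```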
Submitted 2 July, 2024; v1 submitted 14 February, 2024;
originally announced February 2024.
-
Thousands of AI Authors on the Future of AI
Authors:
Katja Grace,
Harlan Stewart,
Julia Fabienne Sandkühler,
Stephen Thomas,
Ben Weinstein-Raun,
Jan Brauner,
Richard C. Korzekwa
Abstract:
In the largest survey of its kind, 2,778 researchers who had published in top-tier artificial intelligence (AI) venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).
Most respondents expressed substantial uncertainty about the long-term value of AI progress: While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.
Submitted 8 October, 2025; v1 submitted 5 January, 2024;
originally announced January 2024.
-
From Past to Future: Rethinking Eligibility Traces
Authors:
Dhawal Gupta,
Scott M. Jordan,
Shreyas Chaudhari,
Bo Liu,
Philip S. Thomas,
Bruno Castro da Silva
Abstract:
In this paper, we introduce a fresh perspective on the challenges of credit assignment and policy evaluation. First, we delve into the nuances of eligibility traces and explore instances where their updates may result in unexpected credit assignment to preceding states. From this investigation emerges the concept of a novel value function, which we refer to as the \emph{bidirectional value function}. Unlike traditional state value functions, bidirectional value functions account for both future expected returns (rewards anticipated from the current state onward) and past expected returns (cumulative rewards from the episode's start to the present). We derive principled update equations to learn this value function and, through experimentation, demonstrate its efficacy in enhancing the process of policy evaluation. In particular, our results indicate that the proposed learning approach can, in certain challenging contexts, perform policy evaluation more rapidly than TD($λ$) -- a method that learns forward value functions, $v^π$, \emph{directly}. Overall, our findings present a new perspective on eligibility traces and potential advantages associated with the novel value function it inspires, especially for policy evaluation.
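The two quantities the bidirectional value function combines can be computed directly from a sampled episode, as in the Monte-Carlo sketch below; the paper derives TD-style update rules rather than this direct computation, and the past return is shown undiscounted here as one plausible reading of the description above.

```python
import numpy as np

# Monte-Carlo illustration of the two targets a bidirectional value
# function combines: the forward return from each step onward and the
# cumulative reward accrued since the start of the episode.

def forward_and_past_returns(rewards, gamma=0.99):
    rewards = np.asarray(rewards, dtype=float)
    T = len(rewards)
    forward = np.zeros(T)   # G_t = r_t + gamma * r_{t+1} + ...
    past = np.zeros(T)      # B_t = r_0 + r_1 + ... + r_{t-1}
    g = 0.0
    for t in reversed(range(T)):
        g = rewards[t] + gamma * g
        forward[t] = g
    past[1:] = np.cumsum(rewards)[:-1]
    return forward, past

fwd, past = forward_and_past_returns([0.0, 0.0, 1.0, 0.0, 2.0])
print("forward:", np.round(fwd, 3))
print("past:   ", past)
```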
Submitted 20 December, 2023;
originally announced December 2023.
-
Behavior Alignment via Reward Function Optimization
Authors:
Dhawal Gupta,
Yash Chandak,
Scott M. Jordan,
Philip S. Thomas,
Bruno Castro da Silva
Abstract:
Designing reward functions for efficiently guiding reinforcement learning (RL) agents toward specific behaviors is a complex task. This is challenging since it requires the identification of reward structures that are not sparse and that avoid inadvertently inducing undesirable behaviors. Naively modifying the reward structure to offer denser and more frequent feedback can lead to unintended outcomes and promote behaviors that are not aligned with the designer's intended goal. Although potential-based reward shaping is often suggested as a remedy, we systematically investigate settings where deploying it often significantly impairs performance. To address these issues, we introduce a new framework that uses a bi-level objective to learn \emph{behavior alignment reward functions}. These functions integrate auxiliary rewards reflecting a designer's heuristics and domain knowledge with the environment's primary rewards. Our approach automatically determines the most effective way to blend these types of feedback, thereby enhancing robustness against heuristic reward misspecification. Remarkably, it can also adapt an agent's policy optimization process to mitigate suboptimalities resulting from limitations and biases inherent in the underlying RL algorithms. We evaluate our method's efficacy on a diverse set of tasks, from small-scale experiments to high-dimensional control challenges. We investigate heuristic auxiliary rewards of varying quality -- some of which are beneficial and others detrimental to the learning process. Our results show that our framework offers a robust and principled way to integrate designer-specified heuristics. It not only addresses key shortcomings of existing approaches but also consistently leads to high-performing solutions, even when given misaligned or poorly-specified auxiliary reward functions.
Submitted 31 October, 2023; v1 submitted 29 October, 2023;
originally announced October 2023.
-
Learning Fair Representations with High-Confidence Guarantees
Authors:
Yuhong Luo,
Austin Hoag,
Philip S. Thomas
Abstract:
Representation learning is increasingly employed to generate representations that are predictive across multiple downstream tasks. Developing representation learning algorithms that provide strong fairness guarantees is thus important because it can prevent unfairness towards disadvantaged groups in all downstream prediction tasks. In this paper, we formally define the problem of learning representations that are fair with high confidence. We then introduce the Fair Representation learning with high-confidence Guarantees (FRG) framework, which provides high-confidence guarantees for limiting unfairness across all downstream models and tasks, with user-defined upper bounds. After proving that FRG ensures fairness for all downstream models and tasks with high probability, we present empirical evaluations that demonstrate FRG's effectiveness at upper bounding unfairness for multiple downstream models and tasks.
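The guarantee being targeted can be pictured as a post-hoc check (this is not the FRG algorithm itself, which enforces the bound with high confidence during representation learning): train an arbitrary downstream model on the learned representations and compare its demographic parity gap against the user-defined epsilon. The data and names below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Post-hoc fairness check on learned representations: measure the
# demographic parity gap of a downstream classifier against a target
# epsilon. FRG itself bounds this quantity with high confidence upstream.

def demographic_parity_gap(predictions, sensitive):
    rates = [predictions[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 16))            # stand-in for learned representations
y = (Z[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)
s = rng.integers(0, 2, size=1000)          # sensitive attribute

clf = LogisticRegression(max_iter=1000).fit(Z, y)
gap = demographic_parity_gap(clf.predict(Z), s)
print(f"demographic parity gap = {gap:.3f} (compare to a chosen epsilon, e.g. 0.05)")
```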
Submitted 23 October, 2023;
originally announced October 2023.
-
Towards Robust Cardiac Segmentation using Graph Convolutional Networks
Authors:
Gilles Van De Vyver,
Sarina Thomas,
Guy Ben-Yosef,
Sindre Hellum Olaisen,
Håvard Dalen,
Lasse Løvstakken,
Erik Smistad
Abstract:
Fully automatic cardiac segmentation can be a fast and reproducible method to extract clinical measurements from an echocardiography examination. The U-Net architecture is the current state-of-the-art deep learning architecture for medical segmentation and can segment cardiac structures in real-time with average errors comparable to inter-observer variability. However, this architecture still generates large outliers that are often anatomically incorrect. This work uses the concept of graph convolutional neural networks that predict the contour points of the structures of interest instead of labeling each pixel. We propose a graph architecture that uses two convolutional rings based on cardiac anatomy and show that this eliminates anatomically incorrect multi-structure segmentations on the publicly available CAMUS dataset. Additionally, this work contributes an ablation study on the graph convolutional architecture and an evaluation of clinical measurements on the clinical HUNT4 dataset. Finally, we propose to use the inter-model agreement of the U-Net and the graph network as a predictor of both the input and segmentation quality. We show this predictor can detect out-of-distribution and unsuitable input images in real-time. Source code is available online: https://github.com/gillesvntnu/GCN_multistructure
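The inter-model agreement idea can be sketched as a simple Dice overlap between the U-Net mask and a mask rasterized from the graph network's contour; the threshold and toy masks below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Inter-model agreement as a quality proxy: low Dice overlap between the
# two models' masks flags inputs that are likely unsuitable or
# out-of-distribution. Threshold and masks are toy values.

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    inter = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * inter / total if total else 1.0

def flag_low_quality(unet_mask, gcn_mask, threshold=0.85):
    agreement = dice(unet_mask, gcn_mask)
    return agreement, agreement < threshold

rng = np.random.default_rng(0)
unet_mask = rng.random((128, 128)) > 0.5
gcn_mask = unet_mask.copy()
gcn_mask[:10] = ~gcn_mask[:10]          # simulate disagreement near one edge
print(flag_low_quality(unet_mask, gcn_mask))
```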
Submitted 2 July, 2024; v1 submitted 2 October, 2023;
originally announced October 2023.
-
ChatGPT as Data Augmentation for Compositional Generalization: A Case Study in Open Intent Detection
Authors:
Yihao Fang,
Xianzhi Li,
Stephen W. Thomas,
Xiaodan Zhu
Abstract:
Open intent detection, a crucial aspect of natural language understanding, involves the identification of previously unseen intents in user-generated text. Despite the progress made in this field, challenges persist in handling new combinations of language components, which is essential for compositional generalization. In this paper, we present a case study exploring the use of ChatGPT as a data augmentation technique to enhance compositional generalization in open intent detection tasks. We begin by discussing the limitations of existing benchmarks in evaluating this problem, highlighting the need to construct datasets that address compositional generalization in open intent detection. By incorporating synthetic data generated by ChatGPT into the training process, we demonstrate that our approach can effectively improve model performance. Rigorous evaluation on multiple benchmarks reveals that our method outperforms existing techniques and significantly enhances open intent detection capabilities. Our findings underscore the potential of large language models like ChatGPT for data augmentation in natural language understanding tasks.
Submitted 25 August, 2023;
originally announced August 2023.