-
Online Convex Optimization with Switching Cost with Only One Single Gradient Evaluation
Authors:
Harsh Shah,
Purna Chandrasekhar,
Rahul Vaze
Abstract:
Online convex optimization with switching cost is considered under the frugal information setting where at time $t$, before action $x_t$ is taken, only a single function evaluation and a single gradient are available at the previously chosen action $x_{t-1}$ for either the current cost function $f_t$ or the most recent cost function $f_{t-1}$. When the switching cost is linear, online algorithms with order-wise optimal competitive ratios are derived for the frugal setting. When the gradient information is noisy, an online algorithm whose competitive ratio grows quadratically with the noise magnitude is derived.
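As an illustration of the frugal information model (not the paper's algorithm), the sketch below runs a one-step-delayed online gradient descent in numpy: before choosing $x_t$, the learner only queries a single gradient of the previous cost at $x_{t-1}$ and pays a linear switching cost for moving.

```python
import numpy as np

def frugal_ogd(grad_prev, x0, T, eta, switch_weight=1.0):
    """Illustrative one-gradient-per-round descent under the frugal model.

    grad_prev(t, x) -> gradient of the *previous* cost f_{t-1} at x; this is
    the only first-order information available before x_t is chosen.
    """
    x = np.asarray(x0, dtype=float)
    total_switching_cost = 0.0
    trajectory = [x.copy()]
    for t in range(1, T + 1):
        g = grad_prev(t, x)                      # single gradient evaluation at x_{t-1}
        x_new = x - eta * g                      # move using only stale information
        total_switching_cost += switch_weight * np.linalg.norm(x_new - x, 1)
        x = x_new
        trajectory.append(x.copy())
    return trajectory, total_switching_cost

# Toy instance: f_t(x) = ||x - c_t||^2 with a slowly drifting minimizer c_t.
if __name__ == "__main__":
    d = 3
    centers = [np.full(d, 0.01 * t) for t in range(200)]
    grad = lambda t, x: 2.0 * (x - centers[t - 1])   # gradient of f_{t-1}
    traj, switch = frugal_ogd(grad, np.zeros(d), T=199, eta=0.1)
    print(f"final point: {traj[-1]}, cumulative switching cost: {switch:.3f}")
```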
Submitted 5 July, 2025;
originally announced July 2025.
-
Toward Autonomous UI Exploration: The UIExplorer Benchmark
Authors:
Andrei Cristian Nica,
Akshaya Vishnu Kudlu Shanbhogue,
Harshil Shah,
Aleix Cambray,
Tudor Berariu,
Lucas Maystre,
David Barber
Abstract:
Autonomous agents must know how to explore user interfaces (UIs) for reliable task solving, yet systematic evaluation of this crucial phase is lacking. We introduce UIExplore-Bench, the first benchmark explicitly dedicated to UI exploration. The benchmark evaluates agents with either Structured mode (granting access to layout information like DOM trees) or Screen mode (relying on GUI-only observations such as screenshots and human-like mouse/keyboard interactions) across three levels in a standardized GitLab sandbox environment. We formalize exploration as the process of maximizing the set of actionable UI components discovered and propose a metric, human-normalized UI-Functionalities Observed (hUFO), to quantify the effectiveness of exploration. Our results show that UIExplore-AlGo achieves the leading mean hUFO scores, reaching up to 77.2% of human performance in Structured mode and 59.0% in Screen mode at 2,000 steps, particularly excelling at the Sparse level. The results highlight the relevance of our benchmark, as current agents show a substantial performance gap compared to one hour of human expert exploration, indicating ample room for future advancements. We publicly release the benchmark environment, an exploration dataset, and an evaluation suite to catalyze research into efficient UI exploration strategies and their downstream applications, such as experience-driven task completion and automated training data generation.
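One illustrative (assumed) reading of the hUFO metric: count the distinct actionable UI components the agent surfaces and normalize by what the human expert discovers in the one-hour reference session.

```python
def hufo(agent_components: set, human_components: set) -> float:
    """Human-normalized UI-Functionalities Observed (illustrative reading).

    agent_components: actionable UI components the agent surfaced within its step budget.
    human_components: components found by a human expert in the one-hour reference session.
    """
    if not human_components:
        raise ValueError("reference set must be non-empty")
    return len(agent_components) / len(human_components)

# Example: an agent that finds 59 of 100 expert-discovered components scores 0.59 (59.0%).
print(hufo({f"c{i}" for i in range(59)}, {f"c{i}" for i in range(100)}))
```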
Submitted 21 June, 2025;
originally announced June 2025.
-
Dual Protection Ring: User Profiling Via Differential Privacy and Service Dissemination Through Private Information Retrieval
Authors:
Imdad Ullah,
Najm Hassan,
Tariq Ahamed Ahangar,
Zawar Hussain Shah,
Mehregan Mahdavi,
Andrew Levula
Abstract:
User profiling is crucial in providing personalised services, as it relies on analysing user behaviour and preferences to deliver targeted services. This approach enhances user experience and promotes heightened engagement. Nevertheless, user profiling also gives rise to noteworthy privacy considerations due to the extensive tracking and monitoring of personal data, potentially leading to surveillance or identity theft. We propose a dual-ring protection mechanism to protect user privacy by examining various threats to user privacy, such as behavioural attacks, profiling fingerprinting and monitoring, profile perturbation, etc., both on the user and service provider sides. We develop user profiles that contain sensitive private attributes and an equivalent profile based on differential privacy for evaluating personalised services. We determine the entropy of the resultant profiles during each update to protect profiling attributes and invoke various processes, such as data evaporation, to artificially increase entropy or destroy private profiling attributes. Furthermore, we use different variants of private information retrieval (PIR) to retrieve personalised services against differentially private profiles. We implement critical components of the proposed model via a proof-of-concept mobile app to demonstrate its applicability over a specific case study of advertising services, which can be generalised to other services. Our experimental results show that the observed processing delays with different PIR schemes are similar to the current advertising systems.
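The generic building blocks, a Laplace mechanism for differentially private profile counts and the Shannon entropy that the scheme monitors, can be sketched as follows; this is illustrative only and not the paper's exact construction.

```python
import numpy as np

def laplace_private_counts(counts, epsilon, sensitivity=1.0, rng=None):
    """Generic Laplace mechanism: noisy interest-category counts for a user profile."""
    rng = rng or np.random.default_rng()
    noisy = np.asarray(counts, float) + rng.laplace(0.0, sensitivity / epsilon, size=len(counts))
    return np.clip(noisy, 0.0, None)   # clipping is post-processing, so privacy is preserved

def profile_entropy(counts):
    """Shannon entropy (bits) of the normalized profile; low entropy reveals attributes."""
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

true_profile = [40, 25, 20, 10, 5]          # e.g. interactions per interest category
dp_profile = laplace_private_counts(true_profile, epsilon=0.5)
print(profile_entropy(true_profile), profile_entropy(dp_profile))
```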
Submitted 16 June, 2025;
originally announced June 2025.
-
Towards Energy-Efficient and Low-Latency Voice-Controlled Smart Homes: A Proposal for Offline Speech Recognition and IoT Integration
Authors:
Peng Huang,
Imdad Ullah,
Xiaotong Wei,
Tariq Ahamed Ahanger,
Najm Hassan,
Zawar Hussain Shah
Abstract:
Smart home systems based on AI speech recognition and IoT technology enable people to control devices through verbal commands, making everyday life more efficient. However, existing AI speech recognition services are primarily deployed on cloud platforms on the Internet. When a user issues a command, speech recognition devices like ``Amazon Echo'' send the recording through numerous network nodes to multiple remote servers and then receive responses over the Internet. This mechanism presents several issues, including unnecessary energy consumption, communication latency, and the risk of a single point of failure. In this position paper, we propose a smart home concept based on offline speech recognition and IoT technology: 1) integrating offline keyword spotting (KWS) technologies into household appliances with limited hardware resources so that they can understand user voice commands; 2) designing a local IoT network with a decentralized architecture to manage and connect various devices, enhancing the robustness and scalability of the system. Such a smart home would allow users to issue low-latency voice commands anywhere in the home without depending on the Internet, while offering better scalability and energy sustainability.
Submitted 11 June, 2025; v1 submitted 9 June, 2025;
originally announced June 2025.
-
The Optimization Paradox in Clinical AI Multi-Agent Systems
Authors:
Suhana Bedi,
Iddah Mlauzi,
Daniel Shin,
Sanmi Koyejo,
Nigam H. Shah
Abstract:
Multi-agent artificial intelligence systems are increasingly deployed in clinical settings, yet the relationship between component-level optimization and system-wide performance remains poorly understood. We evaluated this relationship using 2,400 real patient cases from the MIMIC-CDM dataset across four abdominal pathologies (appendicitis, pancreatitis, cholecystitis, diverticulitis), decomposing clinical diagnosis into information gathering, interpretation, and differential diagnosis. We evaluated single-agent systems (one model performing all tasks) against multi-agent systems (specialized models for each task) using comprehensive metrics spanning diagnostic outcomes, process adherence, and cost efficiency. Our results reveal a paradox: while multi-agent systems generally outperformed single agents, the component-optimized or Best-of-Breed system, despite superior components and excellent process metrics (85.5% information accuracy), significantly underperformed in diagnostic accuracy (67.7% vs. 77.4% for a top multi-agent system). This finding underscores that successful integration of AI in healthcare requires not just component-level optimization but also attention to information flow and compatibility between agents. Our findings highlight the need for end-to-end system validation rather than relying on component metrics alone.
Submitted 11 June, 2025; v1 submitted 6 June, 2025;
originally announced June 2025.
-
Analysis of cost-efficiency of serverless approaches
Authors:
Nakhat Syeda,
Harsh Shah,
Rajvinder Singh,
Suraj Jaju,
Sumedha Kumar,
Gourav Chhabra,
Maria Spichkova
Abstract:
In this paper, we present a survey of research studies related to the cost-effectiveness of the serverless approach and the corresponding cost savings. We conducted a systematic literature review using the Google Scholar search engine, covering the period from 2010 to 2024. We identified 34 related studies, from which we extracted 17 parameters that might influence the relative cost savings of applying the serverless approach.
Submitted 6 June, 2025;
originally announced June 2025.
-
MedHELM: Holistic Evaluation of Large Language Models for Medical Tasks
Authors:
Suhana Bedi,
Hejie Cui,
Miguel Fuentes,
Alyssa Unell,
Michael Wornow,
Juan M. Banda,
Nikesh Kotecha,
Timothy Keyes,
Yifan Mai,
Mert Oez,
Hao Qiu,
Shrey Jain,
Leonardo Schettini,
Mehr Kashyap,
Jason Alan Fries,
Akshay Swaminathan,
Philip Chung,
Fateme Nateghi,
Asad Aali,
Ashwin Nayak,
Shivam Vedak,
Sneha S. Jain,
Birju Patel,
Oluseyi Fayanju,
Shreya Shah
, et al. (56 additional authors not shown)
Abstract:
While large language models (LLMs) achieve near-perfect scores on medical licensing exams, these evaluations inadequately reflect the complexity and diversity of real-world clinical practice. We introduce MedHELM, an extensible evaluation framework for assessing LLM performance on medical tasks, with three key contributions. First, a clinician-validated taxonomy spanning 5 categories, 22 subcategories, and 121 tasks developed with 29 clinicians. Second, a comprehensive benchmark suite comprising 35 benchmarks (17 existing, 18 newly formulated) providing complete coverage of all categories and subcategories in the taxonomy. Third, a systematic comparison of LLMs with improved evaluation methods (using an LLM-jury) and a cost-performance analysis. Evaluation of 9 frontier LLMs, using the 35 benchmarks, revealed significant performance variation. Advanced reasoning models (DeepSeek R1: 66% win-rate; o3-mini: 64% win-rate) demonstrated superior performance, though Claude 3.5 Sonnet achieved comparable results at 40% lower estimated computational cost. On a normalized accuracy scale (0-1), most models performed strongly in Clinical Note Generation (0.73-0.85) and Patient Communication & Education (0.78-0.83), moderately in Medical Research Assistance (0.65-0.75), and generally lower in Clinical Decision Support (0.56-0.72) and Administration & Workflow (0.53-0.63). Our LLM-jury evaluation method achieved good agreement with clinician ratings (ICC = 0.47), surpassing both average clinician-clinician agreement (ICC = 0.43) and automated baselines including ROUGE-L (0.36) and BERTScore-F1 (0.44). Claude 3.5 Sonnet achieved comparable performance to top models at lower estimated cost. These findings highlight the importance of real-world, task-specific evaluation for medical use of LLMs and provide an open-source framework to enable this.
Submitted 2 June, 2025; v1 submitted 26 May, 2025;
originally announced May 2025.
-
Transformer-Based Extraction of Statutory Definitions from the U.S. Code
Authors:
Arpana Hosabettu,
Harsh Shah
Abstract:
Automatic extraction of definitions from legal texts is critical for enhancing the comprehension and clarity of complex legal corpora such as the United States Code (U.S.C.). We present an advanced NLP system leveraging transformer-based architectures to automatically extract defined terms, their definitions, and their scope from the U.S.C. We address the challenges of automatically identifying legal definitions, extracting defined terms, and determining their scope within this complex corpus of over 200,000 pages of federal statutory law. Building upon previous feature-based machine learning methods, our updated model employs domain-specific transformers (Legal-BERT) fine-tuned specifically for statutory texts, significantly improving extraction accuracy. Our work implements a multi-stage pipeline that combines document structure analysis with state-of-the-art language models to process legal text from the XML version of the U.S. Code. Each paragraph is first classified using a fine-tuned legal domain BERT model to determine if it contains a definition. Our system then aggregates related paragraphs into coherent definitional units and applies a combination of attention mechanisms and rule-based patterns to extract defined terms and their jurisdictional scope. The definition extraction system is evaluated on multiple titles of the U.S. Code containing thousands of definitions, demonstrating significant improvements over previous approaches. Our best model achieves 96.8% precision and 98.9% recall (98.2% F1-score), substantially outperforming traditional machine learning classifiers. This work contributes to improving accessibility and understanding of legal information while establishing a foundation for downstream legal reasoning tasks.
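A minimal sketch of the first pipeline stage, classifying each paragraph as definitional or not with a fine-tuned Legal-BERT model; the checkpoint name below is a hypothetical placeholder, not an artifact released with the paper.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical checkpoint name: a Legal-BERT model fine-tuned for the binary
# "contains a statutory definition" task (placeholder, not released by the paper).
CKPT = "your-org/legal-bert-definition-classifier"
tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT)
model.eval()

def is_definition_paragraph(paragraph: str, threshold: float = 0.5) -> bool:
    """Stage 1 of the pipeline: flag paragraphs that likely contain a definition."""
    inputs = tokenizer(paragraph, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    prob_definition = torch.softmax(logits, dim=-1)[0, 1].item()
    return prob_definition >= threshold

print(is_definition_paragraph(
    'The term "State" means each of the several States and the District of Columbia.'
))
```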
Submitted 22 April, 2025;
originally announced April 2025.
-
ForesightNav: Learning Scene Imagination for Efficient Exploration
Authors:
Hardik Shah,
Jiaxu Xing,
Nico Messikommer,
Boyang Sun,
Marc Pollefeys,
Davide Scaramuzza
Abstract:
Understanding how humans leverage prior knowledge to navigate unseen environments while making exploratory decisions is essential for developing autonomous robots with similar abilities. In this work, we propose ForesightNav, a novel exploration strategy inspired by human imagination and reasoning. Our approach equips robotic agents with the capability to predict contextual information, such as occupancy and semantic details, for unexplored regions. These predictions enable the robot to efficiently select meaningful long-term navigation goals, significantly enhancing exploration in unseen environments. We validate our imagination-based approach using the Structured3D dataset, demonstrating accurate occupancy prediction and superior performance in anticipating unseen scene geometry. Our experiments show that the imagination module improves exploration efficiency in unseen environments, achieving a 100% completion rate for PointNav and an SPL of 67% for ObjectNav on the Structured3D Validation split. These contributions demonstrate the power of imagination-driven reasoning for autonomous systems to enhance generalizable and efficient exploration.
Submitted 5 May, 2025; v1 submitted 22 April, 2025;
originally announced April 2025.
-
Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models
Authors:
Samira Abnar,
Harshay Shah,
Dan Busbridge,
Alaaeldin Mohamed Elnouby Ali,
Josh Susskind,
Vimal Thilak
Abstract:
Scaling the capacity of language models has consistently proven to be a reliable approach for improving performance and unlocking new capabilities. Capacity can be primarily defined by two dimensions: the number of model parameters and the compute per example. While scaling typically involves increasing both, the precise interplay between these factors and their combined contribution to overall capacity is not yet fully understood. We explore this relationship in the context of sparse Mixture-of-Experts (MoEs), which allow scaling the number of parameters without proportionally increasing the FLOPs per example. We investigate how varying the sparsity level, i.e., the fraction of inactive parameters, impacts a model's performance during pretraining and downstream few-shot evaluation. We find that under different constraints (e.g., parameter size and total training compute), there is an optimal level of sparsity that improves both training efficiency and model performance. These results provide a better understanding of the impact of sparsity in scaling laws for MoEs and complement existing works in this area, offering insights for designing more efficient architectures.
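A simplified accounting of this trade-off, assuming top-$k$ routing over $E$ experts and counting only the MoE feed-forward blocks (illustrative, not the paper's exact FLOP model):

```python
def moe_sparsity_stats(d_model, d_ff, n_experts, top_k, n_layers):
    """Rough parameter/FLOP accounting for the MoE feed-forward blocks only.

    Assumes each expert is a standard 2-matrix FFN (d_model x d_ff and back) and
    top_k experts are active per token; attention and embeddings are ignored.
    """
    params_per_expert = 2 * d_model * d_ff
    total_params = n_layers * n_experts * params_per_expert
    active_params = n_layers * top_k * params_per_expert
    flops_per_token = 2 * active_params          # one multiply-accumulate ~ 2 FLOPs per weight
    sparsity = 1.0 - top_k / n_experts           # fraction of inactive experts per token
    return total_params, active_params, flops_per_token, sparsity

total, active, flops, s = moe_sparsity_stats(d_model=1024, d_ff=4096, n_experts=64, top_k=2, n_layers=24)
print(f"total FFN params {total / 1e9:.2f}B, active {active / 1e9:.2f}B, sparsity {s:.3f}")
```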
Submitted 2 July, 2025; v1 submitted 21 January, 2025;
originally announced January 2025.
-
Distilling Large Language Models for Efficient Clinical Information Extraction
Authors:
Karthik S. Vedula,
Annika Gupta,
Akshay Swaminathan,
Ivan Lopez,
Suhana Bedi,
Nigam H. Shah
Abstract:
Large language models (LLMs) excel at clinical information extraction but their computational demands limit practical deployment. Knowledge distillation--the process of transferring knowledge from larger to smaller models--offers a potential solution. We evaluate the performance of distilled BERT models, which are approximately 1,000 times smaller than modern LLMs, for clinical named entity recognition (NER) tasks. We leveraged state-of-the-art LLMs (Gemini and OpenAI models) and medical ontologies (RxNorm and SNOMED) as teacher labelers for medication, disease, and symptom extraction. We applied our approach to over 3,300 clinical notes spanning five publicly available datasets, comparing distilled BERT models against both their teacher labelers and BERT models fine-tuned on human labels. External validation was conducted using clinical notes from the MedAlign dataset. For disease extraction, F1 scores were 0.82 (teacher model), 0.89 (BioBERT trained on human labels), and 0.84 (BioBERT-distilled). For medication, F1 scores were 0.84 (teacher model), 0.91 (BioBERT-human), and 0.87 (BioBERT-distilled). For symptoms: F1 score of 0.73 (teacher model) and 0.68 (BioBERT-distilled). Distilled BERT models had faster inference (12x, 4x, 8x faster than GPT-4o, o1-mini, and Gemini Flash respectively) and lower costs (85x, 101x, 2x cheaper than GPT-4o, o1-mini, and Gemini Flash respectively). On the external validation dataset, the distilled BERT model achieved F1 scores of 0.883 (medication), 0.726 (disease), and 0.699 (symptom). Distilled BERT models were up to 101x cheaper and 12x faster than state-of-the-art LLMs while achieving similar performance on NER tasks. Distillation offers a computationally efficient and scalable alternative to large LLMs for clinical information extraction.
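One generic way to build such silver training data, turning teacher-labeled entity spans into token-level BIO tags for fine-tuning a small NER model, is sketched below; the details are illustrative, not the paper's exact pipeline.

```python
def spans_to_bio(tokens, spans):
    """Convert teacher-labeled character spans into token-level BIO tags.

    tokens: list of (text, start, end) produced by any word tokenizer.
    spans:  list of (start, end, label) emitted by the teacher LLM / ontology matcher.
    """
    tags = ["O"] * len(tokens)
    for s_start, s_end, label in spans:
        inside = False
        for i, (_, t_start, t_end) in enumerate(tokens):
            if t_start >= s_start and t_end <= s_end:
                tags[i] = ("I-" if inside else "B-") + label
                inside = True
    return tags

text = "Patient started on metformin for type 2 diabetes"
tokens, pos = [], 0
for word in text.split():
    start = text.index(word, pos)
    tokens.append((word, start, start + len(word)))
    pos = start + len(word)

silver = spans_to_bio(tokens, [(19, 28, "MEDICATION"), (33, 48, "DISEASE")])
print(list(zip([t[0] for t in tokens], silver)))
```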
Submitted 20 December, 2024;
originally announced January 2025.
-
Context Clues: Evaluating Long Context Models for Clinical Prediction Tasks on EHRs
Authors:
Michael Wornow,
Suhana Bedi,
Miguel Angel Fuentes Hernandez,
Ethan Steinberg,
Jason Alan Fries,
Christopher Re,
Sanmi Koyejo,
Nigam H. Shah
Abstract:
Foundation Models (FMs) trained on Electronic Health Records (EHRs) have achieved state-of-the-art results on numerous clinical prediction tasks. However, most existing EHR FMs have context windows of <1k tokens. This prevents them from modeling full patient EHRs, which can exceed tens of thousands of events. Recent advancements in subquadratic long-context architectures (e.g., Mamba) offer a promising solution. However, their application to EHR data has not been well-studied. We address this gap by presenting the first systematic evaluation of the effect of context length on modeling EHR data. We find that longer context models improve predictive performance -- our Mamba-based model surpasses the prior state-of-the-art on 9/14 tasks on the EHRSHOT prediction benchmark. For clinical applications, however, model performance alone is insufficient -- robustness to the unique properties of EHR is crucial. Thus, we also evaluate models across three previously underexplored properties of EHR data: (1) the prevalence of "copy-forwarded" diagnoses which creates artificial repetition of tokens within EHR sequences; (2) the irregular time intervals between EHR events which can lead to a wide range of timespans within a context window; and (3) the natural increase in disease complexity over time which makes later tokens in the EHR harder to predict than earlier ones. Stratifying our EHRSHOT results, we find that higher levels of each property correlate negatively with model performance, but that longer context models are more robust to more extreme levels of these properties. Our work highlights the potential for using long-context architectures to model EHR data, and offers a case study for identifying new challenges in modeling sequential data motivated by domains outside of natural language. We release our models and code at: https://github.com/som-shahlab/long_context_clues
Submitted 18 March, 2025; v1 submitted 9 December, 2024;
originally announced December 2024.
-
On Scalable Design for User-Centric Multi-Modal Shared E-Mobility Systems using MILP and Modified Dijkstra's Algorithm
Authors:
Maqsood Hussain Shah,
Ji Li,
Mingming Liu
Abstract:
In the rapidly evolving landscape of urban transportation, shared e-mobility services have emerged as a sustainable solution to meet growing demand for flexible, eco-friendly travel. However, the existing literature lacks a comprehensive multi-modal optimization framework with a focus on user preferences and real-world constraints. This paper presents a multi-modal optimization framework for shared e-mobility, with a particular focus on e-mobility hubs (e-hubs) with micromobility. We propose and evaluate two approaches: a mixed-integer linear programming (MILP) solution, complemented by a heuristic graph reduction technique to manage computational complexity in scenarios with limited e-hubs, achieving computational advantages of 93%, 72%, and 47% for 20, 50, and 100 e-hubs, respectively; and a modified Dijkstra's algorithm, which offers a more scalable, real-time alternative for larger e-hub networks, with median execution times consistently around 53 ms, regardless of the number of e-hubs. A thorough experimental evaluation on a real-world map and simulated traffic data of Dublin City Centre reveals that both methods seamlessly adapt to practical considerations and constraints such as multi-modality, user preferences, and state of charge for different e-mobility tools. While MILP offers greater flexibility for incorporating additional objectives and constraints, the modified Dijkstra's algorithm is better suited for large-scale, real-time applications due to its computational efficiency.
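The flavour of the graph-search approach can be illustrated with a generic label-setting Dijkstra over (node, battery) states that prunes moves violating the state of charge; this is a simplified sketch, not the paper's exact modification.

```python
import heapq

def multimodal_dijkstra(edges, source, target, battery_init):
    """Shortest travel time over states (node, remaining battery).

    edges: dict node -> list of (next_node, mode, travel_time, energy_needed).
    Using an e-mobility mode is only allowed if enough charge remains;
    walking ("walk") consumes no battery. Illustrative sketch only.
    """
    pq = [(0.0, source, battery_init)]            # (time, node, battery)
    best = {}
    while pq:
        time, node, battery = heapq.heappop(pq)
        if node == target:
            return time
        key = (node, round(battery, 1))           # round battery to keep the state space finite
        if best.get(key, float("inf")) <= time:
            continue
        best[key] = time
        for nxt, mode, t, energy in edges.get(node, []):
            if mode != "walk" and energy > battery:
                continue                           # prune edges violating state of charge
            nxt_battery = battery if mode == "walk" else battery - energy
            heapq.heappush(pq, (time + t, nxt, nxt_battery))
    return float("inf")

edges = {
    "A": [("B", "walk", 12.0, 0.0), ("B", "e-scooter", 4.0, 0.3)],
    "B": [("C", "e-bike", 6.0, 0.5), ("C", "walk", 20.0, 0.0)],
}
print(multimodal_dijkstra(edges, "A", "C", battery_init=0.6))   # 18.0: walk then e-bike
```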
Submitted 14 December, 2024;
originally announced December 2024.
-
Blotto on the Ballot: A Ballot Stuffing Blotto Game
Authors:
Harsh Shah,
Jayakrishnan Nair,
D Manjunath,
Narayan Mandayam
Abstract:
We consider the following Colonel Blotto game between parties $P_1$ and $P_A$: $P_1$ deploys a nonnegative number of troops across $J$ battlefields, while $P_A$ chooses $K$, $K < J$, battlefields and removes all of $P_1$'s troops from the chosen battlefields. $P_1$'s objective is to maximize the number of surviving troops, while $P_A$ wants to minimize it. Drawing an analogy with ballot stuffing by a party contesting an election and the countermeasures by the Election Commission to negate it, we call this the Ballot Stuffing Game. For this zero-sum resource allocation game, we obtain the set of Nash equilibria as the solution to a convex combinatorial optimization problem. We analyze this optimization problem and obtain insights into several nontrivial features of the equilibrium behavior. These features in turn allow us to describe the structure of the solutions and efficient algorithms to obtain them. The model is described as a ballot stuffing game in a plebiscite but has applications in security and auditing games. The results are extended to a parliamentary election model. Numerical examples illustrate applications of the game.
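A worked example of the payoff: for a fixed deployment, $P_A$'s best response is to clear the $K$ most-loaded battlefields, so $P_1$ keeps the total minus the $K$ largest entries, which with a fixed budget favours spreading troops evenly (the equilibrium analysis in the paper is richer; this only illustrates the payoff).

```python
import numpy as np

def surviving_troops(x, K):
    """Payoff to P_1 when P_A best-responds by clearing the K most-loaded battlefields."""
    x = np.sort(np.asarray(x, float))[::-1]
    return float(x[K:].sum())

J, K, budget = 10, 3, 100.0
uneven = np.array([40, 20, 15, 10, 5, 4, 3, 2, 1, 0], float)   # concentrates the budget
even = np.full(J, budget / J)                                   # spreads the budget

print(surviving_troops(uneven, K))   # 25.0 troops survive
print(surviving_troops(even, K))     # 70.0 = budget * (J - K) / J troops survive
```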
Submitted 9 December, 2024;
originally announced December 2024.
-
Time-to-Event Pretraining for 3D Medical Imaging
Authors:
Zepeng Huo,
Jason Alan Fries,
Alejandro Lozano,
Jeya Maria Jose Valanarasu,
Ethan Steinberg,
Louis Blankemeier,
Akshay S. Chaudhari,
Curtis Langlotz,
Nigam H. Shah
Abstract:
With the rise of medical foundation models and the growing availability of imaging data, scalable pretraining techniques offer a promising way to identify imaging biomarkers predictive of future disease risk. While current self-supervised methods for 3D medical imaging models capture local structural features like organ morphology, they fail to link pixel biomarkers with long-term health outcomes due to a missing context problem. Current approaches lack the temporal context necessary to identify biomarkers correlated with disease progression, as they rely on supervision derived only from images and concurrent text descriptions. To address this, we introduce time-to-event pretraining, a pretraining framework for 3D medical imaging models that leverages large-scale temporal supervision from paired, longitudinal electronic health records (EHRs). Using a dataset of 18,945 CT scans (4.2 million 2D images) and time-to-event distributions across thousands of EHR-derived tasks, our method improves outcome prediction, achieving an average AUROC increase of 23.7% and a 29.4% gain in Harrell's C-index across 8 benchmark tasks. Importantly, these gains are achieved without sacrificing diagnostic classification performance. This study lays the foundation for integrating longitudinal EHR and 3D imaging data to advance clinical risk prediction.
Submitted 19 March, 2025; v1 submitted 14 November, 2024;
originally announced November 2024.
-
VERITAS-NLI : Validation and Extraction of Reliable Information Through Automated Scraping and Natural Language Inference
Authors:
Arjun Shah,
Hetansh Shah,
Vedica Bafna,
Charmi Khandor,
Sindhu Nair
Abstract:
In today's world, where information spreads rapidly through online platforms, the rise of fake news poses an alarming threat to the integrity of public discourse, societal trust, and reputed news sources. Classical machine learning and Transformer-based models have been extensively studied for the task of fake news detection; however, they are hampered by their reliance on training data and are unable to generalize to unseen headlines. To address these challenges, we propose a novel solution that leverages web-scraping techniques and Natural Language Inference (NLI) models to retrieve the external knowledge necessary for verifying the accuracy of a headline. Our system is evaluated on a diverse, self-curated evaluation dataset spanning multiple news channels and broad domains. Our best-performing pipeline achieves an accuracy of 84.3%, surpassing the best classical machine learning model by 33.3% and Bidirectional Encoder Representations from Transformers (BERT) by 31.0%. This highlights the efficacy of combining dynamic web scraping with Natural Language Inference to find support for a claimed headline in the corresponding externally retrieved knowledge for the task of fake news detection.
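A minimal sketch of the verification step with an off-the-shelf MNLI cross-encoder (roberta-large-mnli is an example checkpoint, not necessarily the one used in the paper): each scraped snippet is the premise and the headline is the hypothesis.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"   # example NLI checkpoint, not necessarily the paper's choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def verify_headline(headline, scraped_evidence):
    """Label a headline as supported / refuted / unverified from scraped snippets."""
    best = {"entailment": 0.0, "contradiction": 0.0}
    for evidence in scraped_evidence:
        inputs = tokenizer(evidence, headline, truncation=True, return_tensors="pt")
        with torch.no_grad():
            probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
        # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
        best["contradiction"] = max(best["contradiction"], probs[0].item())
        best["entailment"] = max(best["entailment"], probs[2].item())
    if best["entailment"] >= 0.7:
        return "supported"
    if best["contradiction"] >= 0.7:
        return "refuted"
    return "unverified"

print(verify_headline("NASA confirms water on the Moon.",
                      ["NASA announced the detection of water molecules on the lunar surface."]))
```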
Submitted 12 October, 2024;
originally announced October 2024.
-
EDGE-Rec: Efficient and Data-Guided Edge Diffusion For Recommender Systems Graphs
Authors:
Utkarsh Priyam,
Hemit Shah,
Edoardo Botta
Abstract:
Most recommender systems research focuses on binary historical user-item interaction encodings to predict future interactions. User features, item features, and interaction strengths remain largely under-utilized in this space, or are only indirectly utilized, despite proving largely effective in large-scale production recommendation systems. We propose a new attention mechanism, loosely based on the principles of collaborative filtering, called Row-Column Separable Attention (RCSA), to take advantage of real-valued interaction weights as well as user and item features directly. Building on this mechanism, we additionally propose a novel Graph Diffusion Transformer (GDiT) architecture which is trained to iteratively denoise the weighted interaction matrix of the user-item interaction graph directly. The weighted interaction matrix is built from the bipartite structure of the user-item interaction graph and the corresponding edge weights derived from user-item rating interactions. Inspired by recent progress in text-conditioned image generation, our method directly produces user-item rating predictions on the same scale as the original ratings by conditioning the denoising process on user and item features with a principled approach.
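One plausible reading of a row-column separable attention block (an illustrative sketch, not the authors' exact layer): self-attention is applied along each row of the weighted interaction matrix (a user attending over items) and then along each column (an item attending over users).

```python
import torch
import torch.nn as nn

class RowColumnSeparableAttention(nn.Module):
    """Attend along rows, then along columns, of a (users x items x d) feature grid."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        # grid: (n_users, n_items, dim), per-cell embeddings of the weighted interaction matrix
        rows, _ = self.row_attn(grid, grid, grid)          # each user attends over items
        cols = rows.transpose(0, 1)                        # (n_items, n_users, dim)
        cols, _ = self.col_attn(cols, cols, cols)          # each item attends over users
        return cols.transpose(0, 1)

x = torch.randn(8, 16, 32)                         # 8 users, 16 items, 32-dim cell features
print(RowColumnSeparableAttention(32)(x).shape)    # torch.Size([8, 16, 32])
```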
Submitted 22 September, 2024;
originally announced September 2024.
-
meds_reader: A fast and efficient EHR processing library
Authors:
Ethan Steinberg,
Michael Wornow,
Suhana Bedi,
Jason Alan Fries,
Matthew B. A. McDermott,
Nigam H. Shah
Abstract:
The growing demand for machine learning in healthcare requires processing increasingly large electronic health record (EHR) datasets, but existing pipelines are not computationally efficient or scalable. In this paper, we introduce meds_reader, an optimized Python package for efficient EHR data processing that is designed to take advantage of many intrinsic properties of EHR data for improved speed. We then demonstrate the benefits of meds_reader by reimplementing key components of two major EHR processing pipelines, achieving 10-100x improvements in memory, speed, and disk usage. The code for meds_reader can be found at https://github.com/som-shahlab/meds_reader.
Submitted 14 November, 2024; v1 submitted 12 September, 2024;
originally announced September 2024.
-
ContextCite: Attributing Model Generation to Context
Authors:
Benjamin Cohen-Wang,
Harshay Shah,
Kristian Georgiev,
Aleksander Madry
Abstract:
How do language models use information provided as context when generating a response? Can we infer whether a particular generated statement is actually grounded in the context, a misinterpretation, or fabricated? To help answer these questions, we introduce the problem of context attribution: pinpointing the parts of the context (if any) that led a model to generate a particular statement. We then present ContextCite, a simple and scalable method for context attribution that can be applied on top of any existing language model. Finally, we showcase the utility of ContextCite through three applications: (1) helping verify generated statements (2) improving response quality by pruning the context and (3) detecting poisoning attacks. We provide code for ContextCite at https://github.com/MadryLab/context-cite.
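The idea of context attribution can be conveyed with a crude leave-one-out ablation, scoring each context source by how much removing it changes the model's score for the statement; this is only illustrative and is not ContextCite's actual method.

```python
def leave_one_out_attributions(sources, statement, score_fn):
    """Crude context attribution: drop each source and measure the score change.

    sources:   list of context pieces (e.g. sentences or documents).
    statement: the generated statement being attributed.
    score_fn:  callable(context_list, statement) -> log-probability of the statement
               under the language model given that context (user-supplied).
    """
    full_score = score_fn(sources, statement)
    attributions = []
    for i in range(len(sources)):
        ablated = sources[:i] + sources[i + 1:]
        attributions.append(full_score - score_fn(ablated, statement))
    return attributions   # large positive value => source i supports the statement

# Toy scorer: the statement is "grounded" only if some source mentions its key phrase.
toy_score = lambda ctx, stmt: 0.0 if any("moon landing" in s for s in ctx) else -10.0
sources = ["The moon landing happened in 1969.", "Bananas are berries."]
print(leave_one_out_attributions(sources, "The moon landing was in 1969.", toy_score))
```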
Submitted 13 September, 2024; v1 submitted 1 September, 2024;
originally announced September 2024.
-
q-exponential family for policy optimization
Authors:
Lingwei Zhu,
Haseeb Shah,
Han Wang,
Yukie Nagai,
Martha White
Abstract:
Policy optimization methods benefit from a simple and tractable policy parametrization, usually the Gaussian for continuous action spaces. In this paper, we consider a broader policy family that remains tractable: the $q$-exponential family. This family of policies is flexible, allowing the specification of both heavy-tailed policies ($q>1$) and light-tailed policies ($q<1$). This paper examines $q$-exponential policies in several actor-critic algorithms, on both online and offline problems. We find that heavy-tailed policies are more effective in general and can consistently improve on the Gaussian. In particular, we find the Student's t-distribution to be more stable than the Gaussian across settings, and that a heavy-tailed $q$-Gaussian for Tsallis Advantage Weighted Actor-Critic consistently performs well in offline benchmark problems. Our code is available at \url{https://github.com/lingweizhu/qexp}.
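For reference, the family is generated by the Tsallis $q$-exponential $\exp_q(x) = [1 + (1-q)x]_+^{1/(1-q)}$ for $q \neq 1$ (recovering $e^x$ as $q \to 1$); an unnormalized $q$-Gaussian density is $\exp_q(-\beta x^2)$, heavy-tailed for $q>1$ and compactly supported for $q<1$. A small numpy check:

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1 - q) x]_+^{1/(1-q)}, exp(x) at q = 1."""
    x = np.asarray(x, float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

xs = np.linspace(-4, 4, 9)
beta = 0.5
for q in (0.5, 1.0, 2.0):
    density = q_exp(-beta * xs**2, q)          # unnormalized q-Gaussian
    print(q, np.round(density, 3))             # q=2 decays slowly (heavy tails), q=0.5 hits 0
```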
Submitted 24 January, 2025; v1 submitted 13 August, 2024;
originally announced August 2024.
-
Answering real-world clinical questions using large language model based systems
Authors:
Yen Sia Low,
Michael L. Jackson,
Rebecca J. Hyde,
Robert E. Brown,
Neil M. Sanghavi,
Julian D. Baldwin,
C. William Pike,
Jananee Muralidharan,
Gavin Hui,
Natasha Alexander,
Hadeel Hassan,
Rahul V. Nene,
Morgan Pike,
Courtney J. Pokrzywa,
Shivam Vedak,
Adam Paul Yan,
Dong-han Yao,
Amy R. Zipursky,
Christina Dinh,
Philip Ballentine,
Dan C. Derieg,
Vladimir Polony,
Rehan N. Chawdry,
Jordan Davies,
Brigham B. Hyde
, et al. (2 additional authors not shown)
Abstract:
Evidence to guide healthcare decisions is often limited by a lack of relevant and trustworthy literature as well as difficulty in contextualizing existing research for a specific patient. Large language models (LLMs) could potentially address both challenges by either summarizing published literature or generating new studies based on real-world data (RWD). We evaluated the ability of five LLM-based systems in answering 50 clinical questions and had nine independent physicians review the responses for relevance, reliability, and actionability. As it stands, general-purpose LLMs (ChatGPT-4, Claude 3 Opus, Gemini Pro 1.5) rarely produced answers that were deemed relevant and evidence-based (2% - 10%). In contrast, retrieval augmented generation (RAG)-based and agentic LLM systems produced relevant and evidence-based answers for 24% (OpenEvidence) to 58% (ChatRWD) of questions. Only the agentic ChatRWD was able to answer novel questions compared to other LLMs (65% vs. 0-9%). These results suggest that while general-purpose LLMs should not be used as-is, a purpose-built system for evidence summarization based on RAG and one for generating novel evidence working synergistically would improve availability of pertinent evidence for patient care.
Submitted 29 June, 2024;
originally announced July 2024.
-
WONDERBREAD: A Benchmark for Evaluating Multimodal Foundation Models on Business Process Management Tasks
Authors:
Michael Wornow,
Avanika Narayan,
Ben Viggiano,
Ishan S. Khare,
Tathagat Verma,
Tibor Thompson,
Miguel Angel Fuentes Hernandez,
Sudharsan Sundar,
Chloe Trujillo,
Krrish Chawla,
Rongfei Lu,
Justin Shen,
Divya Nagaraj,
Joshua Martinez,
Vardhan Agrawal,
Althea Hudson,
Nigam H. Shah,
Christopher Re
Abstract:
Existing ML benchmarks lack the depth and diversity of annotations needed for evaluating models on business process management (BPM) tasks. BPM is the practice of documenting, measuring, improving, and automating enterprise workflows. However, research has focused almost exclusively on one task - full end-to-end automation using agents based on multimodal foundation models (FMs) like GPT-4. This focus on automation ignores the reality of how most BPM tools are applied today - simply documenting the relevant workflow takes 60% of the time of the typical process optimization project. To address this gap we present WONDERBREAD, the first benchmark for evaluating multimodal FMs on BPM tasks beyond automation. Our contributions are: (1) a dataset containing 2928 documented workflow demonstrations; (2) 6 novel BPM tasks sourced from real-world applications ranging from workflow documentation to knowledge transfer to process improvement; and (3) an automated evaluation harness. Our benchmark shows that while state-of-the-art FMs can automatically generate documentation (e.g. recalling 88% of the steps taken in a video demonstration of a workflow), they struggle to re-apply that knowledge towards finer-grained validation of workflow completion (F1 < 0.3). We hope WONDERBREAD encourages the development of more "human-centered" AI tooling for enterprise applications and furthers the exploration of multimodal FMs for the broader universe of BPM tasks. We publish our dataset and experiments here: https://github.com/HazyResearch/wonderbread
Submitted 10 October, 2024; v1 submitted 19 June, 2024;
originally announced June 2024.
-
Merlin: A Vision Language Foundation Model for 3D Computed Tomography
Authors:
Louis Blankemeier,
Joseph Paul Cohen,
Ashwin Kumar,
Dave Van Veen,
Syed Jamal Safdar Gardezi,
Magdalini Paschali,
Zhihong Chen,
Jean-Benoit Delbrouck,
Eduardo Reis,
Cesar Truyts,
Christian Bluethgen,
Malte Engmann Kjeldskov Jensen,
Sophie Ostmeier,
Maya Varma,
Jeya Maria Jose Valanarasu,
Zhongnan Fang,
Zepeng Huo,
Zaid Nabulsi,
Diego Ardila,
Wei-Hung Weng,
Edson Amaro Junior,
Neera Ahuja,
Jason Fries,
Nigam H. Shah,
Andrew Johnston
, et al. (6 additional authors not shown)
Abstract:
Over 85 million computed tomography (CT) scans are performed annually in the US, of which approximately one quarter focus on the abdomen. Given the current radiologist shortage, there is a large impetus to use artificial intelligence to alleviate the burden of interpreting these complex imaging studies. Prior state-of-the-art approaches for automated medical image interpretation leverage vision language models (VLMs). However, current medical VLMs are generally limited to 2D images and short reports, and do not leverage electronic health record (EHR) data for supervision. We introduce Merlin - a 3D VLM that we train using paired CT scans (6+ million images from 15,331 CTs), EHR diagnosis codes (1.8+ million codes), and radiology reports (6+ million tokens). We evaluate Merlin on 6 task types and 752 individual tasks. The non-adapted (off-the-shelf) tasks include zero-shot findings classification (31 findings), phenotype classification (692 phenotypes), and zero-shot cross-modal retrieval (image to findings and image to impressions), while model adapted tasks include 5-year disease prediction (6 diseases), radiology report generation, and 3D semantic segmentation (20 organs). We perform internal validation on a test set of 5,137 CTs, and external validation on 7,000 clinical CTs and on two public CT datasets (VerSe, TotalSegmentator). Beyond these clinically-relevant evaluations, we assess the efficacy of various network architectures and training strategies to depict that Merlin has favorable performance to existing task-specific baselines. We derive data scaling laws to empirically assess training data needs for requisite downstream task performance. Furthermore, unlike conventional VLMs that require hundreds of GPUs for training, we perform all training on a single GPU.
Submitted 10 June, 2024;
originally announced June 2024.
-
Automating the Enterprise with Foundation Models
Authors:
Michael Wornow,
Avanika Narayan,
Krista Opsahl-Ong,
Quinn McIntyre,
Nigam H. Shah,
Christopher Re
Abstract:
Automating enterprise workflows could unlock $4 trillion/year in productivity gains. Despite being of interest to the data management community for decades, the ultimate vision of end-to-end workflow automation has remained elusive. Current solutions rely on process mining and robotic process automation (RPA), in which a bot is hard-coded to follow a set of predefined rules for completing a workflow. Through case studies of a hospital and large B2B enterprise, we find that the adoption of RPA has been inhibited by high set-up costs (12-18 months), unreliable execution (60% initial accuracy), and burdensome maintenance (requiring multiple FTEs). Multimodal foundation models (FMs) such as GPT-4 offer a promising new approach for end-to-end workflow automation given their generalized reasoning and planning abilities. To study these capabilities we propose ECLAIR, a system to automate enterprise workflows with minimal human supervision. We conduct initial experiments showing that multimodal FMs can address the limitations of traditional RPA with (1) near-human-level understanding of workflows (93% accuracy on a workflow understanding task) and (2) instant set-up with minimal technical barrier (based solely on a natural language description of a workflow, ECLAIR achieves end-to-end completion rates of 40%). We identify human-AI collaboration, validation, and self-improvement as open challenges, and suggest ways they can be solved with data management techniques. Code is available at: https://github.com/HazyResearch/eclair-agents
Submitted 3 May, 2024;
originally announced May 2024.
-
Decomposing and Editing Predictions by Modeling Model Computation
Authors:
Harshay Shah,
Andrew Ilyas,
Aleksander Madry
Abstract:
How does the internal computation of a machine learning model transform inputs into predictions? In this paper, we introduce a task called component modeling that aims to address this question. The goal of component modeling is to decompose an ML model's prediction in terms of its components -- simple functions (e.g., convolution filters, attention heads) that are the "building blocks" of model computation. We focus on a special case of this task, component attribution, where the goal is to estimate the counterfactual impact of individual components on a given prediction. We then present COAR, a scalable algorithm for estimating component attributions; we demonstrate its effectiveness across models, datasets, and modalities. Finally, we show that component attributions estimated with COAR directly enable model editing across five tasks, namely: fixing model errors, ``forgetting'' specific classes, boosting subpopulation robustness, localizing backdoor attacks, and improving robustness to typographic attacks. We provide code for COAR at https://github.com/MadryLab/modelcomponents .
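The counterfactual flavour of component attribution can be illustrated by ablating random subsets of components and fitting a linear surrogate from ablation masks to the model's output; this sketch conveys the general idea but is not COAR's estimator.

```python
import numpy as np

def estimate_component_attributions(predict_fn, n_components, n_samples=2000, ablate_frac=0.1, rng=None):
    """Fit a linear surrogate: which components, when ablated, move this prediction?

    predict_fn: callable(mask) -> scalar model output for one fixed example, where
                mask[i] = 0 means component i is ablated and 1 means it is kept.
    Returns one attribution score per component (effect of keeping vs. ablating it).
    """
    rng = rng or np.random.default_rng(0)
    masks = (rng.random((n_samples, n_components)) > ablate_frac).astype(float)
    outputs = np.array([predict_fn(m) for m in masks])
    X = np.hstack([masks, np.ones((n_samples, 1))])        # least-squares fit with intercept
    coef, *_ = np.linalg.lstsq(X, outputs, rcond=None)
    return coef[:-1]

# Toy "model": component 3 contributes +2.0 to the output, the rest contribute nothing.
toy_predict = lambda mask: 2.0 * mask[3] + 0.1
scores = estimate_component_attributions(toy_predict, n_components=8)
print(np.round(scores, 2))   # component 3 stands out near 2.0
```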
Submitted 17 April, 2024;
originally announced April 2024.
-
Enhancing Financial Data Visualization for Investment Decision-Making
Authors:
Nisarg Patel,
Harmit Shah,
Kishan Mewada
Abstract:
Navigating the intricate landscape of financial markets requires adept forecasting of stock price movements. This paper delves into the potential of Long Short-Term Memory (LSTM) networks for predicting stock dynamics, with a focus on discerning nuanced rise and fall patterns. Leveraging a dataset from the New York Stock Exchange (NYSE), the study incorporates multiple features to enhance LSTM's capacity in capturing complex patterns. Visualization of key attributes, such as opening, closing, low, and high prices, aids in unraveling subtle distinctions crucial for comprehensive market understanding. The meticulously crafted LSTM input structure, inspired by established guidelines, incorporates both price and volume attributes over a 25-day time step, enabling the model to capture temporal intricacies. A comprehensive methodology, including hyperparameter tuning with Grid Search, Early Stopping, and Callback mechanisms, leads to a remarkable 53% improvement in predictive accuracy. The study concludes with insights into model robustness, contributions to financial forecasting literature, and a roadmap for real-time stock market prediction. The amalgamation of LSTM networks, strategic hyperparameter tuning, and informed feature selection presents a potent framework for advancing the accuracy of stock price predictions, contributing substantively to financial time series forecasting discourse.
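A minimal Keras sketch of the described input structure, 25-day windows of price and volume attributes with early stopping; the layer sizes and data below are placeholders, not the paper's tuned configuration.

```python
import numpy as np
import tensorflow as tf

TIME_STEPS, N_FEATURES = 25, 5          # open, close, low, high, volume per day

model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIME_STEPS, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),           # next-day closing price (or return)
])
model.compile(optimizer="adam", loss="mse")

# Placeholder data standing in for windowed NYSE features (samples, 25 days, 5 attributes).
X = np.random.rand(512, TIME_STEPS, N_FEATURES).astype("float32")
y = np.random.rand(512, 1).astype("float32")

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)
model.fit(X, y, validation_split=0.2, epochs=50, batch_size=32, callbacks=[early_stop], verbose=0)
print(model.predict(X[:1], verbose=0).shape)   # (1, 1)
```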
Submitted 9 December, 2023;
originally announced March 2024.
-
Data-driven Energy Consumption Modelling for Electric Micromobility using an Open Dataset
Authors:
Yue Ding,
Sen Yan,
Maqsood Hussain Shah,
Hongyuan Fang,
Ji Li,
Mingming Liu
Abstract:
The escalating challenges of traffic congestion and environmental degradation underscore the critical importance of embracing E-Mobility solutions in urban spaces. In particular, micro E-Mobility tools such as E-scooters and E-bikes play a pivotal role in this transition, offering sustainable alternatives for urban commuters. However, the energy consumption patterns of these tools are a critical aspect that impacts their effectiveness in real-world scenarios and are essential for trip planning and boosting user confidence in using them. To this effect, recent studies have utilised physical models customised for specific mobility tools and conditions, but these models struggle with generalization and effectiveness in real-world scenarios due to a notable absence of open datasets for thorough model evaluation and verification. To fill this gap, our work presents an open dataset, collected in Dublin, Ireland, specifically designed for energy modelling research related to E-Scooters and E-Bikes. Furthermore, we provide a comprehensive analysis of energy consumption modelling based on the dataset using a set of representative machine learning algorithms and compare their performance against contemporary mathematical models as a baseline. Our results demonstrate a notable advantage for data-driven models in comparison to the corresponding mathematical models for estimating energy consumption. Specifically, data-driven models outperform physical models in accuracy by up to 83.83% for E-Bikes and 82.16% for E-Scooters, based on an in-depth analysis of the dataset under certain assumptions.
Submitted 8 November, 2024; v1 submitted 26 March, 2024;
originally announced March 2024.
-
Optimal Design and Implementation of an Open-source Emulation Platform for User-Centric Shared E-mobility Services
Authors:
Maqsood Hussain Shah,
Yue Ding,
Shaoshu Zhu,
Yingqi Gu,
Mingming Liu
Abstract:
With the rising concern over transportation emissions and pollution on a global scale, shared electric mobility services like E-cars, E-bikes, and E-scooters have emerged as promising solutions to mitigate these pressing challenges. However, existing shared E-mobility services exhibit critical design deficiencies, including insufficient service integration, imprecise energy consumption forecasting, limited scalability and geographical coverage, and a notable absence of a user-centric perspective, particularly in the context of multi-modal transportation. More importantly, there is no consolidated open-source platform which could benefit the E-mobility research community. This paper aims to bridge this gap by providing an open-source platform for shared E-mobility. The proposed platform, with an agent-in-the-loop approach and modular architecture, is tailored to diverse user preferences and offers enhanced customization. We demonstrate the viability of this platform by providing a comprehensive analysis of integrated multi-modal route optimization in diverse scenarios of energy availability, user preferences, and E-mobility tool placement, for which we use a modified Ant Colony Optimization algorithm, the so-called Multi-Model Energy Constrained ACO (MMEC-ACO), and Q-Learning algorithms. Our findings demonstrate that Q-learning achieves significantly better performance in terms of travel time cost for more than 90% of the instances compared to MMEC-ACO across different scenarios of energy availability, user preference, and E-mobility tool distribution. For a fixed (O, D) pair, the average execution time to achieve the optimal time cost solution for MMEC-ACO is less than 2 seconds, while Q-learning reaches an optimal time cost in 20 seconds on average. For a run-time of 2 seconds, Q-learning still achieves a better optimal time cost with a 20% reduction over MMEC-ACO's time cost.
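The Q-learning side of the comparison can be sketched with a tabular update over (e-hub, remaining-energy) states and (mode, next-hub) actions, with rewards equal to negative travel times; the toy network below is illustrative, not the paper's formulation.

```python
import random
from collections import defaultdict

# Toy network: from each hub you can walk (slow, no energy) or ride (fast, needs 1 energy unit).
ACTIONS = {
    ("A", 1): [("walk", "B"), ("e-bike", "B")],
    ("A", 0): [("walk", "B")],
    ("B", 1): [("walk", "C"), ("e-scooter", "C")],
    ("B", 0): [("walk", "C")],
}
TRAVEL_TIME = {"walk": 20.0, "e-bike": 4.0, "e-scooter": 5.0}

def step(state, action):
    """Returns (next_state, reward, done); reward is the negative travel time of the move."""
    hub, energy = state
    mode, nxt_hub = action
    energy = energy if mode == "walk" else energy - 1
    done = nxt_hub == "C"
    return (nxt_hub, energy), -TRAVEL_TIME[mode], done

def q_learning(n_episodes=2000, alpha=0.1, gamma=0.95, eps=0.2):
    Q = defaultdict(float)
    for _ in range(n_episodes):
        state, done = ("A", 1), False
        while not done:
            acts = ACTIONS[state]
            a = random.choice(acts) if random.random() < eps else max(acts, key=lambda x: Q[(state, x)])
            nxt, r, done = step(state, a)
            future = 0.0 if done else max(Q[(nxt, x)] for x in ACTIONS[nxt])
            Q[(state, a)] += alpha * (r + gamma * future - Q[(state, a)])
            state = nxt
    return Q

Q = q_learning()
print(max(ACTIONS[("A", 1)], key=lambda a: Q[(("A", 1), a)]))   # typically learns to ride first here
```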
Submitted 1 July, 2024; v1 submitted 12 March, 2024;
originally announced March 2024.
-
Standing on FURM ground -- A framework for evaluating Fair, Useful, and Reliable AI Models in healthcare systems
Authors:
Alison Callahan,
Duncan McElfresh,
Juan M. Banda,
Gabrielle Bunney,
Danton Char,
Jonathan Chen,
Conor K. Corbin,
Debadutta Dash,
Norman L. Downing,
Sneha S. Jain,
Nikesh Kotecha,
Jonathan Masterson,
Michelle M. Mello,
Keith Morse,
Srikar Nallan,
Abby Pandya,
Anurang Revri,
Aditya Sharma,
Christopher Sharp,
Rahul Thapa,
Michael Wornow,
Alaa Youssef,
Michael A. Pfeffer,
Nigam H. Shah
Abstract:
The impact of using artificial intelligence (AI) to guide patient care or operational processes is an interplay of the AI model's output, the decision-making protocol based on that output, and the capacity of the stakeholders involved to take the necessary subsequent action. Estimating the effects of this interplay before deployment, and studying it in real time afterwards, are essential to bridge the chasm between AI model development and achievable benefit. To accomplish this, the Data Science team at Stanford Health Care has developed a Testing and Evaluation (T&E) mechanism to identify fair, useful and reliable AI models (FURM) by conducting an ethical review to identify potential value mismatches, simulations to estimate usefulness, financial projections to assess sustainability, as well as analyses to determine IT feasibility, design a deployment strategy, and recommend a prospective monitoring and evaluation plan. We report on FURM assessments done to evaluate six AI guided solutions for potential adoption, spanning clinical and operational settings, each with the potential to impact from several dozen to tens of thousands of patients each year. We describe the assessment process, summarize the six assessments, and share our framework to enable others to conduct similar assessments. Of the six solutions we assessed, two have moved into a planning and implementation phase. Our novel contributions - usefulness estimates by simulation, financial projections to quantify sustainability, and a process to do ethical assessments - as well as their underlying methods and open source tools, are available for other healthcare systems to conduct actionable evaluations of candidate AI solutions.
Submitted 14 March, 2024; v1 submitted 26 February, 2024;
originally announced March 2024.
-
GNSS Positioning using Cost Function Regulated Multilateration and Graph Neural Networks
Authors:
Amir Jalalirad,
Davide Belli,
Bence Major,
Songwon Jee,
Himanshu Shah,
Will Morrison
Abstract:
In urban environments, where line-of-sight signals from GNSS satellites are frequently blocked by high-rise objects, GNSS receivers are subject to large errors in measuring satellite ranges. Heuristic methods are commonly used to estimate these errors and reduce the impact of noisy measurements on localization accuracy. In our work, we replace these error estimation heuristics with a deep learning model based on Graph Neural Networks. Additionally, by analyzing the cost function of the multilateration process, we derive an optimal method to utilize the estimated errors. Our approach guarantees that the multilateration converges to the receiver's location as the error estimation accuracy increases. We evaluate our solution on a real-world dataset containing more than 100k GNSS epochs, collected from multiple cities with diverse characteristics. The empirical results show improvements from 40% to 80% in the horizontal localization error against recent deep learning baselines as well as classical localization approaches.
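As a rough illustration of how per-satellite error estimates can be folded into the multilateration cost, the sketch below runs Gauss-Newton on synthetic pseudoranges, subtracting an error estimate (a stand-in for the learned model's output) and using it for down-weighting. The satellite geometry, noise model, and weighting rule are assumptions, not the paper's method.

```python
# Minimal sketch (not the paper's method): Gauss-Newton multilateration where
# per-satellite range-error estimates are subtracted from the pseudoranges and
# used to down-weight suspect satellites.
import numpy as np

rng = np.random.default_rng(1)
sats = rng.uniform(-1e7, 1e7, size=(8, 3)) + np.array([0.0, 0.0, 2.0e7])
truth = np.array([1200.0, -800.0, 50.0])
clock_bias = 30.0

ranges = np.linalg.norm(sats - truth, axis=1)
nlos_error = np.where(rng.random(8) < 0.4, rng.uniform(20, 80, 8), 0.0)  # blocked signals
pseudoranges = ranges + clock_bias + nlos_error + rng.normal(0, 1.0, 8)

est_error = nlos_error + rng.normal(0, 2.0, 8)   # imperfect "learned" error estimates
weights = 1.0 / (1.0 + np.abs(est_error))        # down-weight suspect satellites

def solve(pr, err_est, w, iters=10):
    x, b = np.zeros(3), 0.0                      # position and receiver clock bias
    for _ in range(iters):
        d = np.linalg.norm(sats - x, axis=1)
        residual = (pr - err_est) - (d + b)
        J = np.column_stack([(x - sats) / d[:, None], np.ones(len(sats))])
        Wsqrt = np.sqrt(w)[:, None]
        delta, *_ = np.linalg.lstsq(Wsqrt * J, Wsqrt[:, 0] * residual, rcond=None)
        x, b = x + delta[:3], b + delta[3]
    return x, b

naive, _ = solve(pseudoranges, np.zeros(8), np.ones(8))
corrected, _ = solve(pseudoranges, est_error, weights)
print("naive error (m):    ", np.linalg.norm(naive - truth))
print("corrected error (m):", np.linalg.norm(corrected - truth))
```

The point mirrored from the abstract is that better range-error estimates let the same least-squares machinery converge closer to the receiver's true location.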
Submitted 28 February, 2024;
originally announced February 2024.
-
Zero-Shot Clinical Trial Patient Matching with LLMs
Authors:
Michael Wornow,
Alejandro Lozano,
Dev Dash,
Jenelle Jindal,
Kenneth W. Mahaffey,
Nigam H. Shah
Abstract:
Matching patients to clinical trials is a key unsolved challenge in bringing new drugs to market. Today, identifying patients who meet a trial's eligibility criteria is highly manual, taking up to 1 hour per patient. Automated screening is challenging, however, as it requires understanding unstructured clinical text. Large language models (LLMs) offer a promising solution. In this work, we explore their application to trial matching. First, we design an LLM-based system which, given a patient's medical history as unstructured clinical text, evaluates whether that patient meets a set of inclusion criteria (also specified as free text). Our zero-shot system achieves state-of-the-art scores on the n2c2 2018 cohort selection benchmark. Second, we improve the data and cost efficiency of our method by identifying a prompting strategy which matches patients an order of magnitude faster and more cheaply than the status quo, and develop a two-stage retrieval pipeline that reduces the number of tokens processed by up to a third while retaining high performance. Third, we evaluate the interpretability of our system by having clinicians evaluate the natural language justifications generated by the LLM for each eligibility decision, and show that it can output coherent explanations for 97% of its correct decisions and 75% of its incorrect ones. Our results establish the feasibility of using LLMs to accelerate clinical trial operations.
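A hedged sketch of the criterion-by-criterion screening idea described above: each inclusion criterion is checked against the patient note with a single LLM call and a yes/no answer is parsed. The prompt wording, the llm callable, and the parsing convention are placeholders rather than the paper's system.

```python
# Minimal sketch (not the paper's pipeline): evaluating a patient note against
# free-text inclusion criteria one criterion at a time with an LLM callable.
# `llm` is a placeholder for any chat-completion function.
from typing import Callable, Dict, List

PROMPT = """You are screening a patient for a clinical trial.
Patient history:
{note}

Inclusion criterion:
{criterion}

Answer with a single word, YES or NO, then a one-sentence justification."""

def screen_patient(note: str,
                   criteria: List[str],
                   llm: Callable[[str], str]) -> Dict[str, bool]:
    """Return a per-criterion eligibility decision for one patient."""
    decisions = {}
    for criterion in criteria:
        reply = llm(PROMPT.format(note=note, criterion=criterion))
        decisions[criterion] = reply.strip().upper().startswith("YES")
    return decisions

# Example with a trivial stub standing in for a real model:
stub = lambda prompt: "YES - the note mentions a prior myocardial infarction."
print(screen_patient("67M with prior MI, on metformin.",
                     ["History of myocardial infarction"], stub))
```

The justification text the model returns alongside each decision is what clinicians can review, which is the interpretability angle the abstract evaluates.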
Submitted 10 April, 2024; v1 submitted 4 February, 2024;
originally announced February 2024.
-
Homogenization Effects of Large Language Models on Human Creative Ideation
Authors:
Barrett R. Anderson,
Jash Hemant Shah,
Max Kreminski
Abstract:
Large language models (LLMs) are now being used in a wide variety of contexts, including as creativity support tools (CSTs) intended to help their users come up with new ideas. But do LLMs actually support user creativity? We hypothesized that the use of an LLM as a CST might make the LLM's users feel more creative, and even broaden the range of ideas suggested by each individual user, but also homogenize the ideas suggested by different users. We conducted a 36-participant comparative user study and found, in accordance with the homogenization hypothesis, that different users tended to produce less semantically distinct ideas with ChatGPT than with an alternative CST. Additionally, ChatGPT users generated a greater number of more detailed ideas, but felt less responsible for the ideas they generated. We discuss potential implications of these findings for users, designers, and developers of LLM-based CSTs.
Submitted 10 May, 2024; v1 submitted 2 February, 2024;
originally announced February 2024.
-
Enhancing End-to-End Multi-Task Dialogue Systems: A Study on Intrinsic Motivation Reinforcement Learning Algorithms for Improved Training and Adaptability
Authors:
Navin Kamuni,
Hardik Shah,
Sathishkumar Chintala,
Naveen Kunchakuri,
Sujatha Alla Old Dominion
Abstract:
End-to-end multi-task dialogue systems are usually designed with separate modules for the dialogue pipeline. Among these, the policy module is essential for deciding what to do in response to user input. This policy is trained by reinforcement learning algorithms that take advantage of an environment in which an agent receives feedback in the form of a reward signal. Current dialogue systems, however, only provide meagre and simplistic rewards. The goal of this study is to investigate intrinsic motivation reinforcement learning algorithms, which teach the agent an internal incentive system so that it can accelerate training and better judge the quality of its actions. In particular, we adapt random network distillation and curiosity-driven reinforcement learning to measure the frequency of state visits and to encourage exploration using semantic similarity between utterances. Experimental results on MultiWOZ, a heterogeneous dataset, show that intrinsic motivation-based dialogue systems outperform policies that depend on extrinsic incentives. By adopting random network distillation, for example, trained using semantic similarity between user-system dialogues, an average success rate of 73% is achieved. This is a significant improvement over the baseline Proximal Policy Optimization (PPO), which has an average success rate of 60%. In addition, performance indicators such as booking rates and completion rates show a 10% rise over the baseline. Furthermore, these intrinsic incentive models improve the resilience of the system's policy as the number of domains grows, which implies that they could be useful for scaling up to settings that cover a wider range of domains.
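For readers unfamiliar with random network distillation, the sketch below shows the core mechanism assumed here: a frozen random target network, a trainable predictor, and an intrinsic reward equal to the prediction error, which decays as states become familiar. Network sizes and state dimensionality are illustrative; this is not the dialogue-system implementation.

```python
# Minimal sketch (not the paper's setup): a random network distillation (RND)
# intrinsic reward. The predictor tries to match a frozen random target
# network, and its error on a state serves as an exploration bonus.
import torch
import torch.nn as nn

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out))

state_dim, embed_dim = 16, 32
target = mlp(state_dim, embed_dim)
for p in target.parameters():
    p.requires_grad_(False)              # target stays fixed forever
predictor = mlp(state_dim, embed_dim)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def intrinsic_reward(states):
    """Per-state novelty bonus: predictor error against the frozen target."""
    with torch.no_grad():
        tgt = target(states)
    return (predictor(states) - tgt).pow(2).mean(dim=1).detach()

# One training step on a batch of visited states: novel states give large
# bonuses now and smaller ones later, as the predictor catches up.
states = torch.randn(128, state_dim)
bonus = intrinsic_reward(states)          # add this to the extrinsic reward
loss = (predictor(states) - target(states)).pow(2).mean()
opt.zero_grad(); loss.backward(); opt.step()
print(bonus.mean().item())
```

In the dialogue setting described above, the "state" would be an encoding of the conversation so far (the paper uses semantic similarity between utterances), but the bonus mechanism is the same.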
Submitted 25 March, 2024; v1 submitted 31 January, 2024;
originally announced January 2024.
-
Analysis and Validation of Image Search Engines in Histopathology
Authors:
Isaiah Lahr,
Saghir Alfasly,
Peyman Nejat,
Jibran Khan,
Luke Kottom,
Vaishnavi Kumbhar,
Areej Alsaafin,
Abubakr Shafique,
Sobhan Hemati,
Ghazal Alabtah,
Nneka Comfere,
Dennis Murphee,
Aaron Mangold,
Saba Yasir,
Chady Meroueh,
Lisa Boardman,
Vijay H. Shah,
Joaquin J. Garcia,
H. R. Tizhoosh
Abstract:
Searching for similar images in archives of histology and histopathology images is a crucial task that may aid in patient matching for various purposes, ranging from triaging and diagnosis to prognosis and prediction. Whole slide images (WSIs) are highly detailed digital representations of tissue specimens mounted on glass slides. Matching WSI to WSI can serve as the critical method for patient matching. In this paper, we report an extensive analysis and validation of four search methods: bag of visual words (BoVW), Yottixel, SISH, and RetCCL, along with some of their potential variants. We analyze their algorithms and structures and assess their performance. For this evaluation, we utilized four internal datasets ($1269$ patients) and three public datasets ($1207$ patients), totaling more than $200,000$ patches from $38$ different classes/subtypes across five primary sites. Certain search engines, for example, BoVW, exhibit notable efficiency and speed but suffer from low accuracy. By comparison, search engines like Yottixel demonstrate efficiency and speed while providing moderately accurate results. Recent proposals, including SISH, display inefficiency and yield inconsistent outcomes, while alternatives like RetCCL prove inadequate in both accuracy and efficiency. Further research is imperative to address the dual aspects of accuracy and minimal storage requirements in histopathological image search.
Submitted 8 June, 2024; v1 submitted 6 January, 2024;
originally announced January 2024.
-
Advancing Web Accessibility -- A guide to transitioning Design Systems from WCAG 2.0 to WCAG 2.1
Authors:
Hardik Shah
Abstract:
This research focuses on the critical process of upgrading a Design System from Web Content Accessibility Guidelines (WCAG) 2.0 to WCAG 2.1, an essential step in enhancing web accessibility. It emphasizes the importance of staying up to date with evolving accessibility requirements, as well as the critical function of Design Systems in supporting inclusion in digital environments. The article lays out a complete strategy for achieving WCAG 2.1 compliance, covering assessment, strategic planning, implementation, and testing. Collaboration and user involvement are emphasized as critical practices for a successful migration. In addition, the article examines migration barriers and discusses significant lessons learned, offering a realistic view of the intricacies of this transition. Finally, it serves as a practical guide and resource for organizations committed to accessible and user-centered design, providing them with the knowledge they need to navigate the changing landscape of web accessibility.
Submitted 28 November, 2023;
originally announced December 2023.
-
GVFs in the Real World: Making Predictions Online for Water Treatment
Authors:
Muhammad Kamran Janjua,
Haseeb Shah,
Martha White,
Erfan Miahi,
Marlos C. Machado,
Adam White
Abstract:
In this paper we investigate the use of reinforcement-learning based prediction approaches for a real drinking-water treatment plant. Developing such a prediction system is a critical step on the path to optimizing and automating water treatment. Before that, there are many questions to answer about the predictability of the data, suitable neural network architectures, how to overcome partial observability and more. We first describe this dataset, and highlight challenges with seasonality, nonstationarity, partial observability, and heterogeneity across sensors and operation modes of the plant. We then describe General Value Function (GVF) predictions -- discounted cumulative sums of observations -- and highlight why they might be preferable to classical n-step predictions common in time series prediction. We discuss how to use offline data to appropriately pre-train our temporal difference learning (TD) agents that learn these GVF predictions, including how to select hyperparameters for online fine-tuning in deployment. We find that the TD-prediction agent obtains an overall lower normalized mean-squared error than the n-step prediction agent. Finally, we show the importance of learning in deployment, by comparing a TD agent trained purely offline with no online updating to a TD agent that learns online. This final result is one of the first to motivate the importance of adapting predictions in real-time, for non-stationary high-volume systems in the real world.
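A minimal sketch of the GVF idea described above, assuming a linear value function and a synthetic sensor stream: the prediction target is the discounted cumulative sum of a cumulant signal, learned online with TD(0). The cumulant choice, discount, and step size are illustrative assumptions, not the plant deployment.

```python
# Minimal sketch (not the deployed system): online TD(0) learning of a General
# Value Function, i.e. the discounted cumulative sum of a cumulant, with a
# linear value function over the observation vector.
import numpy as np

rng = np.random.default_rng(0)
obs_dim, gamma, alpha = 8, 0.95, 0.01
w = np.zeros(obs_dim)

def step(t):
    """Fake sensor reading plus a cumulant (here, one monitored signal)."""
    x = rng.normal(size=obs_dim) + np.sin(t / 50.0)   # slowly drifting observations
    cumulant = x[0]                                    # predict this signal's discounted sum
    return x, cumulant

x, _ = step(0)
for t in range(1, 20000):
    x_next, c_next = step(t)
    # TD(0): the target for v(x_t) is cumulant + gamma * v(x_{t+1}).
    td_error = c_next + gamma * (w @ x_next) - (w @ x)
    w += alpha * td_error * x
    x = x_next

print("learned GVF prediction for current observation:", w @ x)
```

Because the update runs one observation at a time, the same loop can keep adapting in deployment, which is the learning-in-deployment effect the abstract highlights.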
Submitted 3 December, 2023;
originally announced December 2023.
-
Harnessing customized built-in elements -- Empowering Component-Based Software Engineering and Design Systems with HTML5 Web Components
Authors:
Hardik Shah
Abstract:
Customized built-in elements in HTML5 significantly transform web development. These elements enable developers to create unique HTML components tailored to a specific design and purpose, addressing the unique needs of web applications more quickly and supporting consistent user interfaces and experiences across diverse digital platforms. This study investigates the role of these features in Component-Based Software Engineering (CBSE) and Design Systems, emphasizing the benefits of code modularity, reusability, and scalability in web development. The paper also discusses the difficulties and concerns that must be addressed when creating customized built-in elements, such as browser compatibility, performance optimization, accessibility, security, styling, and interoperability. It emphasizes the importance of standardization, developer tooling, and community interaction in order to fully realize the potential of these features. Looking ahead, customized built-in elements have potential in a variety of applications, including the Internet of Things (IoT), e-commerce, and educational technologies. Their incorporation into Progressive Web Apps (PWAs) is expected to further improve web experiences. While obstacles remain, the article concludes that HTML5 customized built-in elements are a driver for web development innovation, allowing the production of efficient, adaptive, and user-centric web applications in an ever-changing digital context.
Submitted 28 November, 2023;
originally announced November 2023.
-
Creating Temporally Correlated High-Resolution Profiles of Load Injection Using Constrained Generative Adversarial Networks
Authors:
Hritik Gopal Shah,
Behrouz Azimian,
Anamitra Pal
Abstract:
Traditional smart meters, which measure energy usage every 15 minutes or more and report it at least a few hours later, lack the granularity needed for real-time decision-making. To address this practical problem, we introduce a new method using generative adversarial networks (GAN) that enforces temporal consistency on its high-resolution outputs via hard inequality constraints using convex optimization. A unique feature of our GAN model is that it is trained solely on slow timescale aggregated historical energy data obtained from smart meters. The results demonstrate that the model can successfully create minute-by-minute temporally correlated profiles of power usage from 15-minute interval average power consumption information. This innovative approach, emphasizing inter-neuron constraints, offers a promising avenue for improved high-speed state estimation in distribution systems and enhances the applicability of data-driven solutions for monitoring and subsequently controlling such systems.
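One simple way to see the temporal-consistency requirement is as a projection: the generated minute-level profile is adjusted so that every 15-minute block averages to the corresponding smart-meter reading. The sketch below applies that equality-constraint projection to random output standing in for the generator; the paper's formulation with hard inequality constraints enforced via convex optimization inside the network is more general than this post-hoc step.

```python
# Minimal sketch (not the paper's constrained GAN): post-hoc projection of a
# minute-level profile so each 15-minute block averages to the meter reading.
import numpy as np

rng = np.random.default_rng(0)
meter_15min = np.array([1.2, 0.8, 2.0, 1.5])               # kW, four 15-minute readings
raw = rng.uniform(0.0, 3.0, size=meter_15min.size * 15)    # stand-in "generator" output

def project_to_block_means(profile, block_means, block=15):
    """L2 projection onto the per-block equality constraints mean(block) = reading."""
    out = profile.copy().reshape(-1, block)
    out += (block_means - out.mean(axis=1))[:, None]
    # A full convex formulation would also enforce e.g. non-negativity of power.
    return out.reshape(-1)

minute_profile = project_to_block_means(raw, meter_15min)
print(minute_profile.reshape(-1, 15).mean(axis=1))          # matches meter_15min
```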
Submitted 1 September, 2024; v1 submitted 20 November, 2023;
originally announced November 2023.
-
INSPECT: A Multimodal Dataset for Pulmonary Embolism Diagnosis and Prognosis
Authors:
Shih-Cheng Huang,
Zepeng Huo,
Ethan Steinberg,
Chia-Chun Chiang,
Matthew P. Lungren,
Curtis P. Langlotz,
Serena Yeung,
Nigam H. Shah,
Jason A. Fries
Abstract:
Synthesizing information from multiple data sources plays a crucial role in the practice of modern medicine. Current applications of artificial intelligence in medicine often focus on single-modality data due to a lack of publicly available, multimodal medical datasets. To address this limitation, we introduce INSPECT, which contains de-identified longitudinal records from a large cohort of patients at risk for pulmonary embolism (PE), along with ground truth labels for multiple outcomes. INSPECT contains data from 19,402 patients, including CT images, radiology report impression sections, and structured electronic health record (EHR) data (i.e. demographics, diagnoses, procedures, vitals, and medications). Using INSPECT, we develop and release a benchmark for evaluating several baseline modeling approaches on a variety of important PE related tasks. We evaluate image-only, EHR-only, and multimodal fusion models. Trained models and the de-identified dataset are made available for non-commercial use under a data use agreement. To the best of our knowledge, INSPECT is the largest multimodal dataset integrating 3D medical imaging and EHR for reproducible methods evaluation and research.
Submitted 17 November, 2023;
originally announced November 2023.
-
Text-to-Sticker: Style Tailoring Latent Diffusion Models for Human Expression
Authors:
Animesh Sinha,
Bo Sun,
Anmol Kalia,
Arantxa Casanova,
Elliot Blanchard,
David Yan,
Winnie Zhang,
Tony Nelli,
Jiahui Chen,
Hardik Shah,
Licheng Yu,
Mitesh Kumar Singh,
Ankit Ramchandani,
Maziar Sanjabi,
Sonal Gupta,
Amy Bearman,
Dhruv Mahajan
Abstract:
We introduce Style Tailoring, a recipe to finetune Latent Diffusion Models (LDMs) in a distinct domain with high visual quality, prompt alignment and scene diversity. We choose sticker image generation as the target domain, as the images significantly differ from photorealistic samples typically generated by large-scale LDMs. We start with a competent text-to-image model, like Emu, and show that relying on prompt engineering with a photorealistic model to generate stickers leads to poor prompt alignment and scene diversity. To overcome these drawbacks, we first finetune Emu on millions of sticker-like images collected using weak supervision to elicit diversity. Next, we curate human-in-the-loop (HITL) Alignment and Style datasets from model generations, and finetune to improve prompt alignment and style alignment, respectively. Sequential finetuning on these datasets poses a tradeoff between better style alignment and prompt alignment gains. To address this tradeoff, we propose a novel fine-tuning method called Style Tailoring, which jointly fits the content and style distribution and achieves the best tradeoff. Evaluation results show our method improves visual quality by 14%, prompt alignment by 16.2% and scene diversity by 15.3%, compared to prompt engineering the base Emu model for sticker generation.
Submitted 3 October, 2024; v1 submitted 16 November, 2023;
originally announced November 2023.
-
Group Testing for Accurate and Efficient Range-Based Near Neighbor Search for Plagiarism Detection
Authors:
Harsh Shah,
Kashish Mittal,
Ajit Rajwade
Abstract:
This work presents an adaptive group testing framework for the range-based high dimensional near neighbor search problem. Our method efficiently marks each item in a database as neighbor or non-neighbor of a query point, based on a cosine distance threshold without exhaustive search. Like other methods for large scale retrieval, our approach exploits the assumption that most of the items in the database are unrelated to the query. However, it does not assume a large difference between the query's cosine similarity with its least similar neighbor and with its most similar non-neighbor. Following a multi-stage adaptive group testing algorithm based on binary splitting, we divide the set of items to be searched in half at each step, and perform dot product tests on smaller and smaller subsets, many of which we are able to prune away. We show that, using softmax-based features, our method achieves a more than ten-fold speed-up over exhaustive search with no loss of accuracy, on a variety of large datasets. Based on empirically verified models for the distribution of cosine distances, we present a theoretical analysis of the expected number of distance computations per query and the probability that a pool will be pruned. Our method has the following features: (i) It implicitly exploits useful distributional properties of cosine distances unlike other methods; (ii) All required data structures are created purely offline; (iii) It does not impose any strong assumptions on the number of true near neighbors; (iv) It is adaptable to streaming settings where new vectors are dynamically added to the database; and (v) It does not require any parameter tuning. The high recall of our technique makes it particularly suited to plagiarism detection scenarios where it is important to report every database item that is sufficiently similar to the query.
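The pruning logic rests on a simple bound: with nonnegative (for example, softmax-based) features, the dot product of the query with the pooled (summed) vectors upper-bounds every individual dot product in the pool, so a pool whose pooled score is below the threshold can be discarded whole. The sketch below implements binary splitting with that test on synthetic peaked features; the data sizes and the threshold are illustrative assumptions, and the real framework adds the offline structures and analysis described above.

```python
# Minimal sketch (not the paper's framework): range search by binary splitting
# over pooled dot products with nonnegative features.
import numpy as np

rng = np.random.default_rng(0)
n, d, tau = 4096, 64, 0.8
logits = rng.normal(size=(n, d)) * 4.0                    # peaked, softmax-style features
db = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
db /= np.linalg.norm(db, axis=1, keepdims=True)           # unit norm, still nonnegative
q = db[17] + 0.01 * rng.random(d)                         # query close to item 17
q /= np.linalg.norm(q)

evals = {"pooled": 0}

def search(indices):
    """Return all items in `indices` with cosine similarity >= tau."""
    evals["pooled"] += 1
    # Nonnegativity implies q . sum(pool) >= q . x for every x in the pool,
    # so a pooled score below tau rules out the whole pool.
    if q @ db[indices].sum(axis=0) < tau:
        return []
    if len(indices) == 1:             # pooled score IS the exact score here
        return [int(indices[0])]
    mid = len(indices) // 2
    return search(indices[:mid]) + search(indices[mid:])

hits = search(np.arange(n))
exhaustive = np.nonzero(db @ q >= tau)[0].tolist()
print("matches agree with exhaustive search:", hits == exhaustive)
print("pooled tests performed:", evals["pooled"], "of", n, "items")
```

Because the test is a true upper bound, the pruning never loses a neighbor, which is why the method can match exhaustive search exactly while skipping most individual dot products.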
Submitted 6 September, 2024; v1 submitted 5 November, 2023;
originally announced November 2023.
-
Privacy-Preserving Financial Anomaly Detection via Federated Learning & Multi-Party Computation
Authors:
Sunpreet Arora,
Andrew Beams,
Panagiotis Chatzigiannis,
Sebastian Meiser,
Karan Patel,
Srinivasan Raghuraman,
Peter Rindal,
Harshal Shah,
Yizhen Wang,
Yuhang Wu,
Hao Yang,
Mahdi Zamani
Abstract:
One of the main goals of financial institutions (FIs) today is combating fraud and financial crime. To this end, FIs use sophisticated machine-learning models trained using data collected from their customers. The output of machine learning models may be manually reviewed for critical use cases, e.g., determining the likelihood of a transaction being anomalous and the subsequent course of action. While advanced machine learning models greatly aid an FI in anomaly detection, model performance could be significantly improved using additional customer data from other FIs. In practice, however, an FI may not have appropriate consent from customers to share their data with other FIs. Additionally, data privacy regulations may prohibit FIs from sharing clients' sensitive data in certain geographies. Combining customer data to jointly train highly accurate anomaly detection models is therefore challenging for FIs in operational settings.
In this paper, we describe a privacy-preserving framework that allows FIs to jointly train highly accurate anomaly detection models. The framework combines the concept of federated learning with efficient multi-party computation and noisy aggregates inspired by differential privacy. The presented framework was submitted as a winning entry to the financial crime detection track of the US/UK PETs Challenge. The challenge considered an architecture where banks hold customer data and execute transactions through a central network. We show that our solution enables the network to train a highly accurate anomaly detection model while preserving the privacy of customer data. Experimental results demonstrate that incorporating additional customer data through the proposed approach improves our anomaly detection model's AUPRC from 0.6 to 0.7. We discuss how our framework can be generalized to other similar scenarios.
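A minimal sketch of the aggregation pattern referenced above, under strong simplifying assumptions: each client clips its local update, and the server only uses the sum of updates plus Gaussian noise, loosely mimicking secure aggregation combined with differential-privacy-style noise. No actual multi-party computation is performed here, and the clipping norm and noise scale are invented values.

```python
# Minimal sketch (not the challenge solution): noisy aggregation of clipped
# client updates. In a real deployment the sum would be computed under MPC so
# the server never sees any individual update.
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, clip, sigma = 10, 5, 1.0, 0.1

def local_update(seed):
    """Stand-in for a client's locally computed model update."""
    return np.random.default_rng(seed).normal(size=dim)

def clipped(u):
    return u * min(1.0, clip / np.linalg.norm(u))

noisy_sum = sum(clipped(local_update(i)) for i in range(n_clients))
noisy_sum += rng.normal(0.0, sigma * clip, size=dim)     # DP-style perturbation
global_update = noisy_sum / n_clients
print(global_update)
```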
Submitted 6 October, 2023;
originally announced October 2023.
-
OCU-Net: A Novel U-Net Architecture for Enhanced Oral Cancer Segmentation
Authors:
Ahmed Albishri,
Syed Jawad Hussain Shah,
Yugyung Lee,
Rong Wang
Abstract:
Accurate detection of oral cancer is crucial for improving patient outcomes. However, the field faces two key challenges: the scarcity of deep learning-based image segmentation research specifically targeting oral cancer and the lack of annotated data. Our study proposes OCU-Net, a pioneering U-Net image segmentation architecture exclusively designed to detect oral cancer in hematoxylin and eosin (H&E) stained image datasets. OCU-Net incorporates advanced deep learning modules, such as the Channel and Spatial Attention Fusion (CSAF) module, a novel and innovative feature that emphasizes important channel and spatial areas in H&E images while exploring contextual information. In addition, OCU-Net integrates other innovative components such as Squeeze-and-Excite (SE) attention module, Atrous Spatial Pyramid Pooling (ASPP) module, residual blocks, and multi-scale fusion. The incorporation of these modules showed superior performance for oral cancer segmentation for two datasets used in this research. Furthermore, we effectively utilized the efficient ImageNet pre-trained MobileNet-V2 model as a backbone of our OCU-Net to create OCU-Netm, an enhanced version achieving state-of-the-art results. Comprehensive evaluation demonstrates that OCU-Net and OCU-Netm outperformed existing segmentation methods, highlighting their precision in identifying cancer cells in H&E images from OCDC and ORCA datasets.
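As one concrete example of the attention components mentioned above, the sketch below implements a generic Squeeze-and-Excite block of the kind such architectures compose with spatial attention and ASPP modules. The channel count and reduction ratio are illustrative; this is not the OCU-Net code.

```python
# Minimal sketch (not OCU-Net): a generic Squeeze-and-Excite channel-attention
# block, which rescales each feature channel by a learned global importance.
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        scale = self.fc(x.mean(dim=(2, 3)))     # squeeze: global average pool per channel
        return x * scale[:, :, None, None]      # excite: rescale each channel

feats = torch.randn(2, 64, 32, 32)              # e.g. a decoder feature map
print(SqueezeExcite(64)(feats).shape)
```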
Submitted 3 October, 2023;
originally announced October 2023.
-
A Review on AI Algorithms for Energy Management in E-Mobility Services
Authors:
Sen Yan,
Maqsood Hussain Shah,
Ji Li,
Noel O'Connor,
Mingming Liu
Abstract:
E-mobility, or electric mobility, has emerged as a pivotal solution to address pressing environmental and sustainability concerns in the transportation sector. The depletion of fossil fuels, escalating greenhouse gas emissions, and the imperative to combat climate change underscore the significance of transitioning to electric vehicles (EVs). This paper seeks to explore the potential of artificial intelligence (AI) in addressing various challenges related to effective energy management in e-mobility systems (EMS). These challenges encompass critical factors such as range anxiety, charge rate optimization, and the longevity of energy storage in EVs. By analyzing existing literature, we delve into the role that AI can play in tackling these challenges and enabling efficient energy management in EMS. Our objectives are twofold: to provide an overview of the current state-of-the-art in this research domain and propose effective avenues for future investigations. Through this analysis, we aim to contribute to the advancement of sustainable and efficient e-mobility solutions, shaping a greener and more sustainable future for transportation.
Submitted 26 September, 2023;
originally announced September 2023.
-
MedAlign: A Clinician-Generated Dataset for Instruction Following with Electronic Medical Records
Authors:
Scott L. Fleming,
Alejandro Lozano,
William J. Haberkorn,
Jenelle A. Jindal,
Eduardo P. Reis,
Rahul Thapa,
Louis Blankemeier,
Julian Z. Genkins,
Ethan Steinberg,
Ashwin Nayak,
Birju S. Patel,
Chia-Chun Chiang,
Alison Callahan,
Zepeng Huo,
Sergios Gatidis,
Scott J. Adams,
Oluseyi Fayanju,
Shreya J. Shah,
Thomas Savage,
Ethan Goh,
Akshay S. Chaudhari,
Nima Aghaeepour,
Christopher Sharp,
Michael A. Pfeffer,
Percy Liang
, et al. (5 additional authors not shown)
Abstract:
The ability of large language models (LLMs) to follow natural language instructions with human-level fluency suggests many opportunities in healthcare to reduce administrative burden and improve quality of care. However, evaluating LLMs on realistic text generation tasks for healthcare remains challenging. Existing question answering datasets for electronic health record (EHR) data fail to capture the complexity of information needs and documentation burdens experienced by clinicians. To address these challenges, we introduce MedAlign, a benchmark dataset of 983 natural language instructions for EHR data. MedAlign is curated by 15 clinicians (7 specialities), includes clinician-written reference responses for 303 instructions, and provides 276 longitudinal EHRs for grounding instruction-response pairs. We used MedAlign to evaluate 6 general domain LLMs, having clinicians rank the accuracy and quality of each LLM response. We found high error rates, ranging from 35% (GPT-4) to 68% (MPT-7B-Instruct), and an 8.3% drop in accuracy moving from 32k to 2k context lengths for GPT-4. Finally, we report correlations between clinician rankings and automated natural language generation metrics as a way to rank LLMs without human review. We make MedAlign available under a research data use agreement to enable LLM evaluations on tasks aligned with clinician needs and preferences.
Submitted 24 December, 2023; v1 submitted 27 August, 2023;
originally announced August 2023.
-
Data-Free Model Extraction Attacks in the Context of Object Detection
Authors:
Harshit Shah,
Aravindhan G,
Pavan Kulkarni,
Yuvaraj Govidarajulu,
Manojkumar Parmar
Abstract:
A significant number of machine learning models are vulnerable to model extraction attacks, which focus on stealing the models by using specially curated queries against the target model. This task is well accomplished by using part of the training data or a surrogate dataset to train a new model that mimics a target model in a white-box environment. In pragmatic situations, however, the target models are trained on private datasets that are inaccessible to the adversary. Data-free model extraction addresses this problem by using queries artificially curated by a generator, similar to the one used in Generative Adversarial Networks. We propose, for the first time to the best of our knowledge, an adversarial black-box attack that extends model extraction to a regression problem: predicting bounding box coordinates in object detection. As part of our study, we found that defining a suitable loss function and using a novel generator setup are key aspects of extracting the target model. We find that the proposed model extraction method achieves significant results using a reasonable number of queries. The discovery of this object detection vulnerability will support future efforts to secure such models.
Submitted 9 August, 2023;
originally announced August 2023.
-
Bayesian inference for data-efficient, explainable, and safe robotic motion planning: A review
Authors:
Chengmin Zhou,
Chao Wang,
Haseeb Hassan,
Himat Shah,
Bingding Huang,
Pasi Fränti
Abstract:
Bayesian inference offers many advantages for robotic motion planning from four perspectives: uncertainty quantification of the policy, safety (risk-aware) and optimality guarantees for robot motions, data efficiency in training reinforcement learning, and reducing the sim2real gap when the robot is applied to real-world tasks. However, the application of Bayesian inference in robotic motion planning lags behind its comprehensive theory, and there are no comprehensive reviews summarizing this progress to give researchers a systematic understanding of its role in robotic motion planning. This paper first provides the probabilistic theory of Bayesian inference, which is the preliminary for Bayesian inference in complex cases. Second, Bayesian estimation is presented for estimating the posterior of policies or of unknown functions used to compute the policy. Third, classical model-based and model-free Bayesian RL algorithms for robotic motion planning are summarized, and their behaviour in complex cases is analyzed. Fourth, Bayesian inference in inverse RL is analyzed as a way to infer reward functions in a data-efficient manner. Fifth, we systematically present the hybridization of Bayesian inference and RL, a promising direction for improving the convergence of RL for better motion planning. Sixth, building on Bayesian inference, we present interpretable and safe robotic motion planning, which has recently become a hot research topic. Finally, all reviewed algorithms are summarized analytically as knowledge graphs, and the future of Bayesian inference for robotic motion planning is discussed, to pave the way for data-efficient, explainable, and safe robotic motion planning strategies for practical applications.
Submitted 16 July, 2023;
originally announced July 2023.
-
EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone
Authors:
Shraman Pramanick,
Yale Song,
Sayan Nag,
Kevin Qinghong Lin,
Hardik Shah,
Mike Zheng Shou,
Rama Chellappa,
Pengchuan Zhang
Abstract:
Video-language pre-training (VLP) has become increasingly important due to its ability to generalize to various vision and language tasks. However, existing egocentric VLP frameworks utilize separate video and language encoders and learn task-specific cross-modal information only during fine-tuning, limiting the development of a unified system. In this work, we introduce the second generation of egocentric video-language pre-training (EgoVLPv2), a significant improvement over the previous generation, by incorporating cross-modal fusion directly into the video and language backbones. EgoVLPv2 learns strong video-text representation during pre-training and reuses the cross-modal attention modules to support different downstream tasks in a flexible and efficient manner, reducing fine-tuning costs. Moreover, our proposed fusion in the backbone strategy is more lightweight and compute-efficient than stacking additional fusion-specific layers. Extensive experiments on a wide range of VL tasks demonstrate the effectiveness of EgoVLPv2 by achieving consistent state-of-the-art performance over strong baselines across all downstream tasks. Our project page can be found at https://shramanpramanick.github.io/EgoVLPv2/.
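To illustrate what "fusion in the backbone" can look like in the simplest terms, the sketch below inserts a cross-attention step inside a transformer-style block so that video tokens attend to text tokens, rather than fusing only in a separate head. The dimensions, the single direction of fusion, and the block structure are assumptions, not the EgoVLPv2 architecture.

```python
# Minimal sketch (not the EgoVLPv2 code): a backbone block that mixes
# self-attention over video tokens with cross-attention to text tokens.
import torch
import torch.nn as nn

class FusedBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, video_tokens, text_tokens):
        h = self.norm1(video_tokens)
        video_tokens = video_tokens + self.self_attn(h, h, h)[0]
        h = self.norm2(video_tokens)
        video_tokens = video_tokens + self.cross_attn(h, text_tokens, text_tokens)[0]
        return video_tokens

vid = torch.randn(2, 196, 256)   # patch tokens from a video encoder layer
txt = torch.randn(2, 20, 256)    # token embeddings from the language encoder
print(FusedBlock()(vid, txt).shape)
```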
Submitted 18 August, 2023; v1 submitted 11 July, 2023;
originally announced July 2023.
-
EHRSHOT: An EHR Benchmark for Few-Shot Evaluation of Foundation Models
Authors:
Michael Wornow,
Rahul Thapa,
Ethan Steinberg,
Jason A. Fries,
Nigam H. Shah
Abstract:
While the general machine learning (ML) community has benefited from public datasets, tasks, and models, the progress of ML in healthcare has been hampered by a lack of such shared assets. The success of foundation models creates new challenges for healthcare ML by requiring access to shared pretrained models to validate performance benefits. We help address these challenges through three contributions. First, we publish a new dataset, EHRSHOT, which contains deidentified structured data from the electronic health records (EHRs) of 6,739 patients from Stanford Medicine. Unlike MIMIC-III/IV and other popular EHR datasets, EHRSHOT is longitudinal and not restricted to ICU/ED patients. Second, we publish the weights of CLMBR-T-base, a 141M parameter clinical foundation model pretrained on the structured EHR data of 2.57M patients. We are one of the first to fully release such a model for coded EHR data; in contrast, most prior models released for clinical data (e.g. GatorTron, ClinicalBERT) only work with unstructured text and cannot process the rich, structured data within an EHR. We provide an end-to-end pipeline for the community to validate and build upon its performance. Third, we define 15 few-shot clinical prediction tasks, enabling evaluation of foundation models on benefits such as sample efficiency and task adaptation. Our model and dataset are available via a research data use agreement from our website: https://ehrshot.stanford.edu. Code to reproduce our results is available at our GitHub repo: https://github.com/som-shahlab/ehrshot-benchmark
Submitted 11 December, 2023; v1 submitted 5 July, 2023;
originally announced July 2023.
-
End-to-End Neural Network Compression via $\frac{\ell_1}{\ell_2}$ Regularized Latency Surrogates
Authors:
Anshul Nasery,
Hardik Shah,
Arun Sai Suggala,
Prateek Jain
Abstract:
Neural network (NN) compression via techniques such as pruning and quantization requires setting compression hyperparameters (e.g., the number of channels to prune, bitwidths for quantization) for each layer either manually or via neural architecture search (NAS), which can be computationally expensive. We address this problem by providing an end-to-end technique that optimizes a model's Floating Point Operations (FLOPs) or on-device latency via a novel $\frac{\ell_1}{\ell_2}$ latency surrogate. Our algorithm is versatile and can be used with many popular compression methods, including pruning, low-rank factorization, and quantization. Crucially, it is fast and runs in almost the same amount of time as single model training, which is a significant training speed-up over standard NAS methods. For BERT compression on GLUE fine-tuning tasks, we achieve $50\%$ reduction in FLOPs with only $1\%$ drop in performance. For compressing MobileNetV3 on ImageNet-1K, we achieve $15\%$ reduction in FLOPs, and $11\%$ reduction in on-device latency without drop in accuracy, while still requiring $3\times$ less training compute than SOTA compression techniques. Finally, for transfer learning on smaller datasets, our technique identifies $1.2\times$-$1.4\times$ cheaper architectures than the standard MobileNetV3 and EfficientNet suites of architectures at almost the same training cost and accuracy.
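A hedged sketch of the gate-plus-penalty pattern suggested by the title: per-channel gates are trained jointly with the weights, and an l1/l2 penalty on each gate vector acts as a differentiable stand-in for the number of surviving channels, and hence for FLOPs or latency. The exact surrogate, its scaling, and the pruning threshold below are assumptions rather than the paper's formulation.

```python
# Minimal sketch (not the paper's algorithm): per-channel gates with an l1/l2
# sparsity penalty, trained jointly with the task loss. Channels whose gates
# shrink toward zero become pruning candidates.
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    def __init__(self, inp, out):
        super().__init__()
        self.lin = nn.Linear(inp, out)
        self.gate = nn.Parameter(torch.ones(out))    # one gate per output channel

    def forward(self, x):
        return self.lin(x) * self.gate

def l1_over_l2(v, eps=1e-8):
    return v.abs().sum() / (v.norm() + eps)          # small when v is sparse

model = nn.Sequential(GatedLinear(32, 64), nn.ReLU(), GatedLinear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = torch.randn(256, 32), torch.randint(0, 10, (256,))

lam = 0.05                                           # penalty weight (illustrative)
for _ in range(200):
    loss = nn.functional.cross_entropy(model(x), y)
    penalty = sum(l1_over_l2(m.gate) for m in model if isinstance(m, GatedLinear))
    (loss + lam * penalty).backward()
    opt.step(); opt.zero_grad()

# Count gates still clearly "on" in each gated layer.
print([int((m.gate.abs() > 1e-2).sum()) for m in model if isinstance(m, GatedLinear)])
```

Because the penalty is differentiable, the per-layer compression levels are set by ordinary gradient training rather than by an outer search loop, which is the speed advantage the abstract emphasizes.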
Submitted 13 June, 2023; v1 submitted 9 June, 2023;
originally announced June 2023.