-
Robust Five-Class and Binary Diabetic Retinopathy Classification Using Transfer Learning and Data Augmentation
Authors:
Faisal Ahmed,
Mohammad Alfrad Nobel Bhuiyan
Abstract:
Diabetic retinopathy (DR) is a leading cause of vision loss worldwide, and early diagnosis through automated retinal image analysis can significantly reduce the risk of blindness. This paper presents a robust deep learning framework for both binary and five-class DR classification, leveraging transfer learning and extensive data augmentation to address the challenges of class imbalance and limited training data. We evaluate a range of pretrained convolutional neural network architectures, including variants of ResNet and EfficientNet, on the APTOS 2019 dataset.
For binary classification, our proposed model achieves a state-of-the-art accuracy of 98.9%, with a precision of 98.6%, recall of 99.3%, F1-score of 98.9%, and an AUC of 99.4%. In the more challenging five-class severity classification task, our model obtains a competitive accuracy of 84.6% and an AUC of 94.1%, outperforming several existing approaches. Our findings also demonstrate that EfficientNet-B0 and ResNet34 offer optimal trade-offs between accuracy and computational efficiency across both tasks.
These results underscore the effectiveness of combining class-balanced augmentation with transfer learning for high-performance DR diagnosis. The proposed framework provides a scalable and accurate solution for DR screening, with potential for deployment in real-world clinical environments.
Submitted 22 July, 2025;
originally announced July 2025.
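For readers who want the gist of the recipe, the sketch below outlines transfer learning with augmentation in PyTorch. It is a minimal illustration under assumed hyperparameters (input size, learning rate, transform choices), not the authors' exact configuration.

    # Minimal PyTorch sketch of transfer learning for five-class DR grading.
    # Hyperparameters and transforms are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    # Augmentations of this kind, paired with a class-weighted sampler or loss,
    # are a common way to counter class imbalance in fundus datasets.
    train_tfms = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(15),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, 5)  # 5 severity grades

    criterion = nn.CrossEntropyLoss()  # use a 2-unit head for the binary task
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)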
-
Distribution-Free Uncertainty-Aware Virtual Sensing via Conformalized Neural Operators
Authors:
Kazuma Kobayashi,
Shailesh Garg,
Farid Ahmed,
Souvik Chakraborty,
Syed Bahauddin Alam
Abstract:
Robust uncertainty quantification (UQ) remains a critical barrier to the safe deployment of deep learning in real-time virtual sensing, particularly in high-stakes domains where sparse, noisy, or non-collocated sensor data are the norm. We introduce the Conformalized Monte Carlo Operator (CMCO), a framework that equips neural operator-based virtual sensing with calibrated, distribution-free prediction intervals. By unifying Monte Carlo dropout with split conformal prediction in a single DeepONet architecture, CMCO achieves spatially resolved uncertainty estimates without retraining, ensembling, or custom loss design. Our method addresses a longstanding challenge: how to endow operator learning with efficient and reliable UQ across heterogeneous domains. Through rigorous evaluation on three distinct applications (turbulent flow, elastoplastic deformation, and global cosmic radiation dose estimation), CMCO consistently attains near-nominal empirical coverage, even in settings with strong spatial gradients and proxy-based sensing. This breakthrough offers a general-purpose, plug-and-play UQ solution for neural operators, unlocking real-time, trustworthy inference in digital twins, sensor fusion, and safety-critical monitoring. By bridging theory and deployment with minimal computational overhead, CMCO establishes a new foundation for scalable, generalizable, and uncertainty-aware scientific machine learning.
Submitted 15 July, 2025;
originally announced July 2025.
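A minimal sketch of the two ingredients named above, Monte Carlo dropout and split conformal prediction, follows. The model and calibration tensors are hypothetical stand-ins; the actual CMCO formulation operates inside a DeepONet and is more involved.

    # Split conformal intervals over MC-dropout predictions (illustrative sketch,
    # not the authors' CMCO implementation; model, x_cal, y_cal, x_test are stand-ins).
    import numpy as np
    import torch

    def mc_dropout_mean(model, x, n_samples=32):
        model.train()  # keep dropout layers active at inference time
        with torch.no_grad():
            return torch.stack([model(x) for _ in range(n_samples)]).mean(dim=0)

    # Calibration split: nonconformity scores are absolute residuals.
    scores = (mc_dropout_mean(model, x_cal) - y_cal).abs().flatten().numpy()
    alpha, n = 0.1, len(scores)  # alpha = 0.1 targets 90% coverage
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n), method="higher")

    # Distribution-free interval for new queries (marginal coverage guarantee).
    pred = mc_dropout_mean(model, x_test)
    lower, upper = pred - q, pred + q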
-
Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities
Authors:
Gheorghe Comanici,
Eric Bieber,
Mike Schaekermann,
Ice Pasupat,
Noveen Sachdeva,
Inderjit Dhillon,
Marcel Blistein,
Ori Ram,
Dan Zhang,
Evan Rosen,
Luke Marris,
Sam Petulla,
Colin Gaffney,
Asaf Aharoni,
Nathan Lintz,
Tiago Cardal Pais,
Henrik Jacobsson,
Idan Szpektor,
Nan-Jiang Jiang,
Krishna Haridasan,
Ahmed Omran,
Nikunj Saunshi,
Dara Bahri,
Gaurav Mishra,
Eric Chu
, et al. (3284 additional authors not shown)
Abstract:
In this report, we introduce the Gemini 2.X model family: Gemini 2.5 Pro and Gemini 2.5 Flash, as well as our earlier Gemini 2.0 Flash and Flash-Lite models. Gemini 2.5 Pro is our most capable model yet, achieving SoTA performance on frontier coding and reasoning benchmarks. In addition to its incredible coding and reasoning skills, Gemini 2.5 Pro is a thinking model that excels at multimodal understanding and can now process up to 3 hours of video content. Its unique combination of long-context, multimodal, and reasoning capabilities can unlock new agentic workflows. Gemini 2.5 Flash provides excellent reasoning abilities at a fraction of the compute and latency requirements, and Gemini 2.0 Flash and Flash-Lite provide high performance at low latency and cost. Taken together, the Gemini 2.X model generation spans the full Pareto frontier of model capability vs. cost, allowing users to explore the boundaries of what is possible with complex agentic problem solving.
Submitted 22 July, 2025; v1 submitted 7 July, 2025;
originally announced July 2025.
-
Bridging Sequential Deep Operator Network and Video Diffusion: Residual Refinement of Spatio-Temporal PDE Solutions
Authors:
Jaewan Park,
Farid Ahmed,
Kazuma Kobayashi,
Seid Koric,
Syed Bahauddin Alam,
Iwona Jasiuk,
Diab Abueidda
Abstract:
Video-diffusion models have recently set the standard in video generation, inpainting, and domain translation thanks to their training stability and high perceptual fidelity. Building on these strengths, we repurpose conditional video diffusion as a physics surrogate for spatio-temporal fields governed by partial differential equations (PDEs). Our two-stage surrogate first applies a Sequential Deep Operator Network (S-DeepONet) to produce a coarse, physics-consistent prior from the prescribed boundary or loading conditions. The prior is then passed to a conditional video diffusion model that learns only the residual: the point-wise difference between the ground truth and the S-DeepONet prediction. By shifting the learning burden from the full solution to its much smaller residual space, diffusion can focus on sharpening high-frequency structures without sacrificing global coherence. The framework is assessed on two disparate benchmarks: (i) vortex-dominated lid-driven cavity flow and (ii) tensile plastic deformation of dogbone specimens. Across these datasets the hybrid surrogate consistently outperforms its single-stage counterpart, cutting the mean relative L2 error from 4.57% to 0.83% for the flow problem and from 4.42% to 2.94% for plasticity, relative improvements of 81.8% and 33.5%, respectively. The hybrid approach not only lowers quantitative errors but also improves visual quality, visibly recovering fine spatial details. These results show that (i) conditioning diffusion on a physics-aware prior enables faithful reconstruction of localized features, (ii) residual learning reduces the complexity of the problem, accelerating convergence and enhancing accuracy, and (iii) the same architecture transfers seamlessly from incompressible flow to nonlinear elasto-plasticity without problem-specific architectural modifications, highlighting its broad applicability to nonlinear, time-dependent continua.
Submitted 8 July, 2025;
originally announced July 2025.
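At inference time, the two-stage composition described above reduces to adding a diffusion-sampled residual to the S-DeepONet prior. The sketch below shows that composition; s_deeponet and diffusion are hypothetical handles to the two trained stages.

    # Two-stage residual refinement at inference (illustrative sketch).
    def predict_field(cond, s_deeponet, diffusion):
        prior = s_deeponet(cond)                      # coarse, physics-consistent prior
        residual = diffusion.sample(condition=prior)  # stage 2 models u_true - prior only
        return prior + residual                       # refined spatio-temporal field

    # During training, the diffusion target is the point-wise residual:
    #   residual_target = u_true - s_deeponet(cond)
    # so the model spends capacity on high-frequency detail, not the global solution.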
-
CLIP-RL: Surgical Scene Segmentation Using Contrastive Language-Vision Pretraining & Reinforcement Learning
Authors:
Fatmaelzahraa Ali Ahmed,
Muhammad Arsalan,
Abdulaziz Al-Ali,
Khalid Al-Jalham,
Shidin Balakrishnan
Abstract:
Understanding surgical scenes can provide better healthcare quality for patients, especially with the vast amount of video data that is generated during MIS. Processing these videos generates valuable assets for training sophisticated models. In this paper, we introduce CLIP-RL, a novel contrastive language-image pre-training model tailored for semantic segmentation of surgical scenes. CLIP-RL presents a new segmentation approach that combines reinforcement learning and curriculum learning, enabling continuous refinement of the segmentation masks during the full training pipeline. Our model has shown robust performance under challenging optical conditions, such as occlusions, texture variations, and dynamic lighting. The CLIP model serves as a powerful feature extractor, capturing rich semantic context that enhances the distinction between instruments and tissues. The RL module plays a pivotal role in dynamically refining predictions through iterative action-space adjustments. We evaluated CLIP-RL on the EndoVis 2018 and EndoVis 2017 datasets. CLIP-RL achieved a mean IoU of 81% on EndoVis 2018, outperforming state-of-the-art models, and a mean IoU of 74.12% on EndoVis 2017. This superior performance was achieved due to the combination of contrastive learning with reinforcement learning and curriculum learning.
Submitted 6 July, 2025;
originally announced July 2025.
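Mean IoU is the headline metric in this entry and the next; for reference, a standard per-class computation looks like the generic sketch below (not code from either paper).

    # Generic mean-IoU computation over integer-labeled segmentation masks.
    import numpy as np

    def mean_iou(pred, target, num_classes):
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, target == c).sum()
            union = np.logical_or(pred == c, target == c).sum()
            if union > 0:              # skip classes absent from both masks
                ious.append(inter / union)
        return float(np.mean(ious))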
-
Surg-SegFormer: A Dual Transformer-Based Model for Holistic Surgical Scene Segmentation
Authors:
Fatimaelzahraa Ahmed,
Muraam Abdel-Ghani,
Muhammad Arsalan,
Mahmoud Ali,
Abdulaziz Al-Ali,
Shidin Balakrishnan
Abstract:
Holistic surgical scene segmentation in robot-assisted surgery (RAS) enables surgical residents to identify various anatomical tissues, articulated tools, and critical structures, such as veins and vessels. Given the strict intraoperative time constraints, it is challenging for surgeons to provide detailed real-time explanations of the operative field for trainees. This challenge is compounded by the scarcity of expert surgeons relative to trainees, making unambiguous delineation of go- and no-go zones impractical. Therefore, high-performance semantic segmentation models offer a solution by providing clear postoperative analyses of surgical procedures. However, recent advanced segmentation models rely on user-generated prompts, rendering them impractical for lengthy surgical videos that commonly exceed an hour. To address this challenge, we introduce Surg-SegFormer, a novel prompt-free model that outperforms current state-of-the-art techniques. Surg-SegFormer attained a mean Intersection over Union (mIoU) of 0.80 on the EndoVis2018 dataset and 0.54 on the EndoVis2017 dataset. By providing robust and automated surgical scene comprehension, this model significantly reduces the tutoring burden on expert surgeons, empowering residents to independently and effectively understand complex surgical environments.
Submitted 6 July, 2025;
originally announced July 2025.
-
Topological Signatures vs. Gradient Histograms: A Comparative Study for Medical Image Classification
Authors:
Faisal Ahmed,
Mohammad Alfrad Nobel Bhuiyan
Abstract:
We present the first comparative study of two fundamentally distinct feature extraction techniques: Histogram of Oriented Gradients (HOG) and Topological Data Analysis (TDA), for medical image classification using retinal fundus images. HOG captures local texture and edge patterns through gradient orientation histograms, while TDA, using cubical persistent homology, extracts high-level topological signatures that reflect the global structure of pixel intensities. We evaluate both methods on the large APTOS dataset for two classification tasks: binary detection (normal versus diabetic retinopathy) and five-class diabetic retinopathy severity grading. From each image, we extract 26,244 HOG features and 800 TDA features, using them independently to train seven classical machine learning models with 10-fold cross-validation. XGBoost achieved the best performance in both cases: 94.29 percent accuracy (HOG) and 94.18 percent (TDA) on the binary task; 74.41 percent (HOG) and 74.69 percent (TDA) on the multi-class task. Our results show that both methods offer competitive performance but encode different structural aspects of the images. This is the first work to benchmark gradient-based and topological features on retinal imagery. The techniques are interpretable, applicable to other medical imaging domains, and suitable for integration into deep learning pipelines.
Submitted 2 July, 2025;
originally announced July 2025.
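A minimal sketch of the HOG-plus-XGBoost pipeline follows. The HOG parameters shown are generic defaults, not necessarily the settings that produce the paper's 26,244-dimensional vectors.

    # HOG features + XGBoost with 10-fold cross-validation (illustrative sketch).
    import numpy as np
    from skimage.feature import hog
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBClassifier

    def hog_features(gray):
        return hog(gray, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    X = np.stack([hog_features(img) for img in images])  # images: grayscale fundus arrays
    clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
    acc = cross_val_score(clf, X, labels, cv=10, scoring="accuracy").mean()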
-
Fixing It in Post: A Comparative Study of LLM Post-Training Data Quality and Model Performance
Authors:
Aladin Djuhera,
Swanand Ravindra Kadhe,
Syed Zawad,
Farhan Ahmed,
Heiko Ludwig,
Holger Boche
Abstract:
Recent work on large language models (LLMs) has increasingly focused on post-training and alignment with datasets curated to enhance instruction following, world knowledge, and specialized skills. However, most post-training datasets used in leading open- and closed-source LLMs remain inaccessible to the public, with limited information about their construction process. This lack of transparency has motivated the recent development of open-source post-training corpora. While training on these open alternatives can yield performance comparable to that of leading models, systematic comparisons remain challenging due to the significant computational cost of conducting them rigorously at scale, and are therefore largely absent. As a result, it remains unclear how specific samples, task types, or curation strategies influence downstream performance when assessing data quality. In this work, we conduct the first comprehensive side-by-side analysis of two prominent open post-training datasets: Tulu-3-SFT-Mix and SmolTalk. Using the Magpie framework, we annotate each sample with detailed quality metrics, including turn structure (single-turn vs. multi-turn), task category, input quality, and response quality, and we derive statistics that reveal structural and qualitative similarities and differences between the two datasets. Based on these insights, we design a principled curation recipe that produces a new data mixture, TuluTalk, which contains 14% fewer samples than either source dataset while matching or exceeding their performance on key benchmarks. Our findings offer actionable insights for constructing more effective post-training datasets that improve model performance within practical resource limits. To support future research, we publicly release both the annotated source datasets and our curated TuluTalk mixture.
Submitted 6 June, 2025;
originally announced June 2025.
-
Federated Deep Reinforcement Learning-Driven O-RAN for Automatic Multirobot Reconfiguration
Authors:
Faisal Ahmed,
Myungjin Lee,
Shao-Yu Lien,
Suresh Subramaniam,
Motoharu Matsuura,
Hiroshi Hasegawa,
Shih-Chun Lin
Abstract:
The rapid evolution of Industry 4.0 has led to the emergence of smart factories, where multirobot systems operate autonomously to enhance productivity, reduce operational costs, and improve system adaptability. However, maintaining reliable and efficient network operations in these dynamic and complex environments requires advanced automation mechanisms. This study presents a zero-touch network platform that integrates a hierarchical Open Radio Access Network (O-RAN) architecture, enabling the seamless incorporation of advanced machine learning algorithms and dynamic management of communication and computational resources, while ensuring uninterrupted connectivity with the multirobot system. Leveraging this adaptability, the platform utilizes federated deep reinforcement learning (FedDRL) to enable distributed decision-making across multiple learning agents, facilitating the adaptive parameter reconfiguration of transmitters (i.e., the multirobot system) to optimize long-term system throughput and transmission energy efficiency. Simulation results demonstrate that within the proposed O-RAN-enabled zero-touch network platform, FedDRL achieves a 12% increase in system throughput, a 32% improvement in normalized average transmission energy efficiency, and a 28% reduction in average transmission energy consumption compared to baseline methods such as independent DRL.
Submitted 31 May, 2025;
originally announced June 2025.
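At the heart of FedDRL-style training is parameter aggregation across agents. Below is a generic federated-averaging sketch; the paper's O-RAN orchestration (hierarchical controllers, resource management) sits on top of a step like this.

    # Generic FedAvg over agents' policy parameters (assumes float tensors).
    import torch

    def fed_avg(state_dicts, weights=None):
        n = len(state_dicts)
        weights = weights or [1.0 / n] * n
        avg = {k: torch.zeros_like(v) for k, v in state_dicts[0].items()}
        for sd, w in zip(state_dicts, weights):
            for k, v in sd.items():
                avg[k] += w * v
        return avg

    # Each robot-side agent trains locally on its own transitions, then an
    # aggregator computes: global_sd = fed_avg([a.policy.state_dict() for a in agents])
    # and broadcasts global_sd back to the agents for the next round.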
-
SafeCOMM: What about Safety Alignment in Fine-Tuned Telecom Large Language Models?
Authors:
Aladin Djuhera,
Swanand Ravindra Kadhe,
Farhan Ahmed,
Syed Zawad,
Holger Boche,
Walid Saad
Abstract:
Fine-tuning large language models (LLMs) for telecom tasks and datasets is a common practice to adapt general-purpose models to the telecom domain. However, little attention has been paid to how this process may compromise model safety. Recent research has shown that even benign fine-tuning can degrade the safety alignment of LLMs, causing them to respond to harmful or unethical user queries. In this paper, we investigate this issue for telecom-tuned LLMs using three representative datasets featured by the GenAINet initiative. We show that safety degradation persists even for structured and seemingly harmless datasets such as 3GPP standards and tabular records, indicating that telecom-specific data is not immune to safety erosion during fine-tuning. We further extend our analysis to publicly available Telecom LLMs trained via continual pre-training, revealing that safety alignment is often severely lacking, primarily due to the omission of safety-focused instruction tuning. To address these issues in both fine-tuned and pre-trained models, we conduct extensive experiments and evaluate three safety realignment defenses (SafeInstruct, SafeLoRA, and SafeMERGE) using established red-teaming benchmarks. The results show that, across all settings, the proposed defenses can effectively restore safety after harmful degradation without compromising downstream task performance, leading to Safe teleCOMMunication (SafeCOMM) models. In a nutshell, our work serves as a diagnostic study and practical guide for safety realignment in telecom-tuned LLMs, and emphasizes the importance of safety-aware instruction and fine-tuning for real-world deployments of Telecom LLMs.
Submitted 29 May, 2025;
originally announced June 2025.
-
VideoCAD: A Large-Scale Video Dataset for Learning UI Interactions and 3D Reasoning from CAD Software
Authors:
Brandon Man,
Ghadi Nehme,
Md Ferdous Alam,
Faez Ahmed
Abstract:
Computer-Aided Design (CAD) is a time-consuming and complex process, requiring precise, long-horizon user interactions with intricate 3D interfaces. While recent advances in AI-driven user interface (UI) agents show promise, most existing datasets and methods focus on short, low-complexity tasks in mobile or web applications, failing to capture the demands of professional engineering tools. In this work, we introduce VideoCAD, the first attempt at engineering UI interaction learning for precision tasks. Specifically, VideoCAD is a large-scale synthetic dataset consisting of over 41K annotated video recordings of CAD operations, generated using an automated framework for collecting high-fidelity UI action data from human-made CAD designs. Compared to existing datasets, VideoCAD offers an order of magnitude higher complexity in UI interaction learning for real-world engineering tasks, having up to a 20x longer time horizon than other datasets. We show two important downstream applications of VideoCAD: learning UI interactions from professional precision 3D CAD tools and a visual question-answering (VQA) benchmark designed to evaluate multimodal large language models' (LLMs) spatial reasoning and video understanding abilities. To learn the UI interactions, we propose VideoCADFormer, a state-of-the-art model in learning CAD interactions directly from video, which outperforms multiple behavior cloning baselines. Both VideoCADFormer and the VQA benchmark derived from VideoCAD reveal key challenges in the current state of video-based UI understanding, including the need for precise action grounding, multi-modal and spatial reasoning, and long-horizon dependencies.
Submitted 30 May, 2025;
originally announced May 2025.
-
GIT-BO: High-Dimensional Bayesian Optimization with Tabular Foundation Models
Authors:
Rosen Ting-Ying Yu,
Cyril Picard,
Faez Ahmed
Abstract:
Bayesian optimization (BO) effectively optimizes expensive black-box functions but faces significant challenges in high-dimensional spaces (dimensions exceeding 100) due to the curse of dimensionality. Existing high-dimensional BO methods typically leverage low-dimensional embeddings or structural assumptions to mitigate this challenge, yet these approaches frequently incur considerable computational overhead and rigidity due to iterative surrogate retraining and fixed assumptions. To address these limitations, we propose Gradient-Informed Bayesian Optimization using Tabular Foundation Models (GIT-BO), an approach that utilizes a pre-trained tabular foundation model (TFM) as a surrogate, leveraging its gradient information to adaptively identify low-dimensional subspaces for optimization. We propose a way to exploit internal gradient computations from the TFM's forward pass by creating a gradient-informed diagnostic matrix that reveals the most sensitive directions of the TFM's predictions, enabling optimization in a continuously re-estimated active subspace without the need for repeated model retraining. Extensive empirical evaluation across 23 synthetic and real-world benchmarks demonstrates that GIT-BO consistently outperforms four state-of-the-art Gaussian process-based high-dimensional BO methods, showing superior scalability and optimization performance, especially as dimensionality increases to 500 dimensions. This work establishes foundation models, augmented with gradient-informed adaptive subspace identification, as highly competitive alternatives to traditional Gaussian process-based approaches for high-dimensional Bayesian optimization tasks.
Submitted 29 May, 2025; v1 submitted 26 May, 2025;
originally announced May 2025.
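The gradient-informed diagnostic matrix described above can be illustrated with a standard active-subspace computation, sketched below; the TFM-specific gradient extraction is abstracted into a plain array of gradients.

    # Active-subspace identification from surrogate gradients (illustrative sketch).
    import numpy as np

    def active_subspace(grads, k=10):
        # grads: (n_points, d) array of d(prediction)/dx evaluated at sampled points
        C = grads.T @ grads / len(grads)   # gradient-informed diagnostic matrix
        eigvals, eigvecs = np.linalg.eigh(C)
        order = np.argsort(eigvals)[::-1]
        return eigvecs[:, order[:k]]       # (d, k) basis of the most sensitive directions

    # BO then proposes candidates in the k-dim subspace, x = W @ z, re-estimating
    # W each iteration from fresh gradients instead of retraining a surrogate.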
-
CAD-Coder: An Open-Source Vision-Language Model for Computer-Aided Design Code Generation
Authors:
Anna C. Doris,
Md Ferdous Alam,
Amin Heyrani Nobari,
Faez Ahmed
Abstract:
Efficient creation of accurate and editable 3D CAD models is critical in engineering design, significantly impacting cost and time-to-market in product innovation. Current manual workflows remain highly time-consuming and demand extensive user expertise. While recent developments in AI-driven CAD generation show promise, existing models are limited by incomplete representations of CAD operations, inability to generalize to real-world images, and low output accuracy. This paper introduces CAD-Coder, an open-source Vision-Language Model (VLM) explicitly fine-tuned to generate editable CAD code (CadQuery Python) directly from visual input. Leveraging GenCAD-Code, a novel dataset we created consisting of over 163k CAD-model image and code pairs, CAD-Coder outperforms state-of-the-art VLM baselines such as GPT-4.5 and Qwen2.5-VL-72B, achieving a 100% valid syntax rate and the highest accuracy in 3D solid similarity. Notably, our VLM demonstrates some signs of generalizability, successfully generating CAD code from real-world images and executing CAD operations unseen during fine-tuning. The performance and adaptability of CAD-Coder highlight the potential of VLMs fine-tuned on code to streamline CAD workflows for engineers and designers. CAD-Coder is publicly available at: https://github.com/anniedoris/CAD-Coder.
Submitted 20 May, 2025;
originally announced May 2025.
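For context, CadQuery scripts of the kind CAD-Coder is trained to emit look like the hand-written example below (an illustration of the target representation, not actual model output).

    # A small, editable CadQuery model: a plate with four mounting holes.
    import cadquery as cq

    plate = (
        cq.Workplane("XY")
        .box(80, 60, 10)                      # base plate, all dimensions editable
        .faces(">Z").workplane()
        .rect(60, 40, forConstruction=True)   # construction rectangle for hole layout
        .vertices()
        .hole(5)                              # drill a hole at each corner vertex
    )
    cq.exporters.export(plate, "plate.step")  # export an editable CAD artifact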
-
LLM-based Text Simplification and its Effect on User Comprehension and Cognitive Load
Authors:
Theo Guidroz,
Diego Ardila,
Jimmy Li,
Adam Mansour,
Paul Jhun,
Nina Gonzalez,
Xiang Ji,
Mike Sanchez,
Sujay Kakarmath,
Mathias MJ Bellaiche,
Miguel Ángel Garrido,
Faruk Ahmed,
Divyansh Choudhary,
Jay Hartford,
Chenwei Xu,
Henry Javier Serrano Echeverria,
Yifan Wang,
Jeff Shaffer,
Eric Cao,
Yossi Matias,
Avinatan Hassidim,
Dale R Webster,
Yun Liu,
Sho Fujiwara
, et al. (2 additional authors not shown)
Abstract:
Information on the web, such as scientific publications and Wikipedia, often surpasses users' reading level. To help address this, we used a self-refinement approach to develop an LLM capability for minimally lossy text simplification. To validate our approach, we conducted a randomized study involving 4563 participants and 31 texts spanning 6 broad subject areas: PubMed (biomedical scientific articles), biology, law, finance, literature/philosophy, and aerospace/computer science. Participants were randomized to viewing original or simplified texts in a subject area, and answered multiple-choice questions (MCQs) that tested their comprehension of the text. The participants were also asked to provide qualitative feedback such as task difficulty. Our results indicate that participants who read the simplified text answered more MCQs correctly than their counterparts who read the original text (3.9% absolute increase, p<0.05). This gain was most striking with PubMed (14.6%), while more moderate gains were observed for the finance (5.5%), aerospace/computer science (3.8%), and legal (3.5%) domains. Notably, the results were robust to whether participants could refer back to the text while answering MCQs. The absolute accuracy decreased by up to ~9% for both original and simplified setups where participants could not refer back to the text, but the ~4% overall improvement persisted. Finally, participants' self-reported perceived ease based on a simplified NASA Task Load Index was greater for those who read the simplified text (absolute change of 0.33 on a 5-point scale, p<0.05). This randomized study, involving an order of magnitude more participants than prior works, demonstrates the potential of LLMs to make complex information easier to understand. Our work aims to enable a broader audience to better learn and make use of expert knowledge available on the web, improving information accessibility.
Submitted 4 May, 2025;
originally announced May 2025.
-
ImageR: Enhancing Bug Report Clarity by Screenshots
Authors:
Xuchen Tan,
Deenu Yadav,
Faiz Ahmed,
Maleknaz Nayebi
Abstract:
In issue-tracking systems, incorporating screenshots significantly enhances the clarity of bug reports, facilitating more efficient communication and expediting issue resolution. However, determining when and what type of visual content to include remains challenging, as not all attachments effectively contribute to problem-solving; studies indicate that 22.5% of images in issue reports fail to aid in resolving the reported issues. To address this, we introduce ImageR, an AI model and tool that analyzes issue reports to assess the potential benefits of including screenshots and recommends the most pertinent types when appropriate. By proactively suggesting relevant visuals, ImageR aims to make issue reports clearer, more informative, and time-efficient. We have curated and publicly shared a dataset comprising 6,235 Bugzilla issues, each meticulously labeled with the type of image attachment, providing a valuable resource for benchmarking and advancing research in image processing within developer communication contexts. To evaluate ImageR, we conducted empirical experiments on a subset of these reports from various Mozilla projects. The tool achieved an F1-score of 0.76 in determining when images are needed, with 75% of users finding its recommendations highly valuable. By minimizing the back-and-forth communication often needed to obtain suitable screenshots, ImageR streamlines the bug reporting process. Furthermore, it guides users in selecting the most effective visual documentation from ten established categories, potentially reducing resolution times and improving the quality of bug documentation. ImageR is open-source, inviting further use and improvement by the community. The labeled dataset offers a rare resource for benchmarking and exploring image processing in the context of developer communication.
Submitted 3 May, 2025;
originally announced May 2025.
-
Inferring Questions from Programming Screenshots
Authors:
Faiz Ahmed,
Xuchen Tan,
Folajinmi Adewole,
Suprakash Datta,
Maleknaz Nayebi
Abstract:
The integration of generative AI into developer forums like Stack Overflow presents an opportunity to enhance problem-solving by allowing users to post screenshots of code or Integrated Development Environments (IDEs) instead of traditional text-based queries. This study evaluates the effectiveness of various large language models (LLMs), specifically LLAMA, GEMINI, and GPT-4o, in interpreting such visual inputs. We employ prompt engineering techniques, including in-context learning, chain-of-thought prompting, and few-shot learning, to assess each model's responsiveness and accuracy. Our findings show that while GPT-4o demonstrates promising capabilities, achieving over 60% similarity to baseline questions for 51.75% of the tested images, challenges remain in obtaining consistent and accurate interpretations for more complex images. This research advances our understanding of the feasibility of using generative AI for image-centric problem-solving in developer communities, highlighting both the potential benefits and current limitations of this approach while envisioning a future where visual-based debugging copilot tools become a reality.
Submitted 26 April, 2025;
originally announced April 2025.
-
Learning from Less: SINDy Surrogates in RL
Authors:
Aniket Dixit,
Muhammad Ibrahim Khan,
Faizan Ahmed,
James Brusey
Abstract:
This paper introduces an approach for developing surrogate environments in reinforcement learning (RL) using the Sparse Identification of Nonlinear Dynamics (SINDy) algorithm. We demonstrate the effectiveness of our approach through extensive experiments in OpenAI Gym environments, particularly Mountain Car and Lunar Lander. Our results show that SINDy-based surrogate models can accurately capture the underlying dynamics of these environments while reducing computational costs by 20-35%. With only 75 interactions for Mountain Car and 1000 for Lunar Lander, we achieve state-wise correlations exceeding 0.997, with mean squared errors as low as 3.11e-06 for Mountain Car velocity and 1.42e-06 for Lunar Lander position. RL agents trained in these surrogate environments require fewer total steps (65,075 vs. 100,000 for Mountain Car and 801,000 vs. 1,000,000 for Lunar Lander) while achieving comparable performance to those trained in the original environments, exhibiting similar convergence patterns and final performance metrics. This work contributes to the field of model-based RL by providing an efficient method for generating accurate, interpretable surrogate environments.
Submitted 25 April, 2025;
originally announced April 2025.
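A minimal sketch of building such a surrogate with the pysindy package follows; library choices and hyperparameters are assumptions, and control inputs (actions) would be passed alongside the states in a full RL setup.

    # Fitting a SINDy model to logged environment states (illustrative sketch).
    import numpy as np
    import pysindy as ps

    # states: (n_steps, state_dim) trajectory logged from, e.g., Mountain Car; dt: timestep
    model = ps.SINDy(
        optimizer=ps.STLSQ(threshold=0.01),          # sparse regression on candidate terms
        feature_library=ps.PolynomialLibrary(degree=3),
    )
    model.fit(states, t=dt)
    model.print()                                    # inspect the identified dynamics

    # Roll the interpretable surrogate forward in place of the real environment:
    sim = model.simulate(states[0], t=np.arange(0, 5.0, dt))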
-
Continual Learning Strategies for 3D Engineering Regression Problems: A Benchmarking Study
Authors:
Kaira M. Samuel,
Faez Ahmed
Abstract:
Engineering problems that apply machine learning often involve computationally intensive methods but rely on limited datasets. As engineering data evolves with new designs and constraints, models must incorporate new knowledge over time. However, high computational costs make retraining models from scratch infeasible. Continual learning (CL) offers a promising solution by enabling models to learn from sequential data while mitigating catastrophic forgetting, where a model forgets previously learned mappings. This work introduces CL to engineering design by benchmarking several CL methods on representative regression tasks. We apply these strategies to five engineering datasets and construct nine new engineering CL benchmarks to evaluate their ability to address forgetting and improve generalization. Preliminary results show that applying existing CL methods to these tasks improves performance over naive baselines. In particular, the Replay strategy achieved performance comparable to retraining in several benchmarks while reducing training time by nearly half, demonstrating its potential for real-world engineering workflows. The code and datasets used in this work will be available at: https://github.com/kmsamuel/cl-for-engineering-release.
Submitted 16 April, 2025;
originally announced April 2025.
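The Replay strategy highlighted above can be summarized by a small buffer that mixes past tasks' samples into new-task batches; the sketch below is a generic illustration, not the benchmarked implementation.

    # Generic replay buffer for continual regression tasks.
    import random

    class ReplayBuffer:
        def __init__(self, capacity=1000):
            self.capacity, self.data = capacity, []

        def add(self, samples):
            for s in samples:
                if len(self.data) < self.capacity:
                    self.data.append(s)
                else:  # replace a random old sample once the buffer is full
                    self.data[random.randrange(self.capacity)] = s

        def sample(self, k):
            return random.sample(self.data, min(k, len(self.data)))

    # Per step on task t: train on a new-task batch plus buffer.sample(k), so the
    # model revisits earlier mappings and catastrophic forgetting is mitigated.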
-
C-SHAP for time series: An approach to high-level temporal explanations
Authors:
Annemarie Jutte,
Faizan Ahmed,
Jeroen Linssen,
Maurice van Keulen
Abstract:
Time series are ubiquitous in domains such as energy forecasting, healthcare, and industry. Using AI systems, some tasks within these domains can be efficiently handled. Explainable AI (XAI) aims to increase the reliability of AI solutions by explaining model reasoning. For time series, many XAI methods provide point- or sequence-based attribution maps. These methods explain model reasoning in terms of low-level patterns. However, they do not capture high-level patterns that may also influence model reasoning. We propose a concept-based method to provide explanations in terms of these high-level patterns. In this paper, we present C-SHAP for time series, an approach which determines the contribution of concepts to a model outcome. We provide a general definition of C-SHAP and present an example implementation using time series decomposition. Additionally, we demonstrate the effectiveness of the methodology through a use case from the energy domain.
Submitted 15 April, 2025;
originally announced April 2025.
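To make the decomposition-based example implementation concrete, the sketch below computes exact Shapley contributions of trend, seasonal, and residual concepts to a model output. It is a simplified illustration under assumed choices (statsmodels decomposition, zeroing absent components); model_predict is a hypothetical stand-in.

    # Concept contributions via exact Shapley values over decomposition components.
    from itertools import combinations
    from math import factorial
    from statsmodels.tsa.seasonal import seasonal_decompose

    dec = seasonal_decompose(series, period=24)  # series: e.g. hourly energy demand
    concepts = {"trend": dec.trend, "seasonal": dec.seasonal, "residual": dec.resid}
    names = list(concepts)

    def value(subset):
        # Model output with only the chosen concepts present (others zeroed out).
        x = sum(concepts[c].fillna(0) for c in subset) if subset else series * 0
        return float(model_predict(x))           # hypothetical model wrapper

    shap = {}
    for c in names:
        others = [o for o in names if o != c]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(len(names) - len(S) - 1) / factorial(len(names))
                total += w * (value(set(S) | {c}) - value(set(S)))
        shap[c] = total                          # concept's contribution to the outcome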
-
How Is Generative AI Used for Persona Development?: A Systematic Review of 52 Research Articles
Authors:
Danial Amin,
Joni Salminen,
Farhan Ahmed,
Sonja M. H. Tervola,
Sankalp Sethi,
Bernard J. Jansen
Abstract:
Although Generative AI (GenAI) has the potential for persona development, many challenges must be addressed. This research systematically reviews 52 articles from 2022-2024, with important findings. First, closed commercial models are frequently used in persona development, creating a monoculture. Second, GenAI is used in various stages of persona development (data collection, segmentation, enrichment, and evaluation). Third, similar to other quantitative persona development techniques, there are major gaps in persona evaluation for AI-generated personas. Fourth, human-AI collaboration models are underdeveloped, despite human oversight being crucial for maintaining ethical standards. These findings imply that realizing the full potential of AI-generated personas will require substantial efforts across academia and industry. To that end, we provide a list of research avenues to inspire future work.
Submitted 7 April, 2025;
originally announced April 2025.
-
AI Judges in Design: Statistical Perspectives on Achieving Human Expert Equivalence With Vision-Language Models
Authors:
Kristen M. Edwards,
Farnaz Tehranchi,
Scarlett R. Miller,
Faez Ahmed
Abstract:
The subjective evaluation of early stage engineering designs, such as conceptual sketches, traditionally relies on human experts. However, expert evaluations are time-consuming, expensive, and sometimes inconsistent. Recent advances in vision-language models (VLMs) offer the potential to automate design assessments, but it is crucial to ensure that these AI "judges" perform on par with human experts. However, no existing framework assesses expert equivalence. This paper introduces a rigorous statistical framework to determine whether an AI judge's ratings match those of human experts. We apply this framework in a case study evaluating four VLM-based judges on key design metrics (uniqueness, creativity, usefulness, and drawing quality). These AI judges employ various in-context learning (ICL) techniques, including uni- vs. multimodal prompts and inference-time reasoning. The same statistical framework is used to assess three trained novices for expert equivalence. Results show that the top-performing AI judge, using text- and image-based ICL with reasoning, achieves expert-level agreement for uniqueness and drawing quality and outperforms or matches trained novices across all metrics. In 6/6 runs for both uniqueness and creativity, and 5/6 runs for both drawing quality and usefulness, its agreement with experts meets or exceeds that of the majority of trained novices. These findings suggest that reasoning-supported VLM models can achieve human-expert equivalence in design evaluation. This has implications for scaling design evaluation in education and practice, and provides a general statistical framework for validating AI judges in other domains requiring subjective content evaluation.
Submitted 1 April, 2025;
originally announced April 2025.
-
AI Agents in Engineering Design: A Multi-Agent Framework for Aesthetic and Aerodynamic Car Design
Authors:
Mohamed Elrefaie,
Janet Qian,
Raina Wu,
Qian Chen,
Angela Dai,
Faez Ahmed
Abstract:
We introduce the concept of "Design Agents" for engineering applications, particularly focusing on the automotive design process, while emphasizing that our approach can be readily extended to other engineering and design domains. Our framework integrates AI-driven design agents into the traditional engineering workflow, demonstrating how these specialized computational agents interact seamlessly with engineers and designers to augment creativity, enhance efficiency, and significantly accelerate the overall design cycle. By automating and streamlining tasks traditionally performed manually, such as conceptual sketching, styling enhancements, 3D shape retrieval and generative modeling, computational fluid dynamics (CFD) meshing, and aerodynamic simulations, our approach reduces certain aspects of the conventional workflow from weeks and days down to minutes. These agents leverage state-of-the-art vision-language models (VLMs), large language models (LLMs), and geometric deep learning techniques, providing rapid iteration and comprehensive design exploration capabilities. We ground our methodology in industry-standard benchmarks, encompassing a wide variety of conventional automotive designs, and utilize high-fidelity aerodynamic simulations to ensure practical and applicable outcomes. Furthermore, we present design agents that can swiftly and accurately predict simulation outcomes, empowering engineers and designers to engage in more informed design optimization and exploration. This research underscores the transformative potential of integrating advanced generative AI techniques into complex engineering tasks, paving the way for broader adoption and innovation across multiple engineering disciplines.
Submitted 30 March, 2025;
originally announced March 2025.
-
ModalTune: Fine-Tuning Slide-Level Foundation Models with Multi-Modal Information for Multi-task Learning in Digital Pathology
Authors:
Vishwesh Ramanathan,
Tony Xu,
Pushpak Pati,
Faruk Ahmed,
Maged Goubran,
Anne L. Martel
Abstract:
Prediction tasks in digital pathology are challenging due to the massive size of whole-slide images (WSIs) and the weak nature of training signals. Advances in computing, data availability, and self-supervised learning (SSL) have paved the way for slide-level foundation models (SLFMs) that can improve prediction tasks in low-data regimes. However, working with these models is challenging, with issues such as catastrophic forgetting during fine-tuning and under-utilization of shared information between tasks and modalities. To overcome these two challenges, we propose ModalTune, a novel fine-tuning framework which introduces the Modal Adapter to integrate new modalities without modifying SLFM weights. Additionally, we use large-language models (LLMs) to encode labels as text, capturing semantic relationships and enhancing generalization across multiple tasks and cancer types in a single training recipe. ModalTune achieves state-of-the-art (SOTA) results against both uni-modal and multi-modal models across four cancer types, jointly improving survival and cancer subtype prediction while remaining competitive in pan-cancer settings. Additionally, we show ModalTune is highly generalizable to two out-of-distribution (OOD) datasets. To our knowledge, this is the first unified fine-tuning framework for multi-modal, multi-task, and pan-cancer modeling in digital pathology.
Submitted 21 March, 2025;
originally announced March 2025.
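The core idea, integrating new modalities without touching SLFM weights, resembles the generic adapter sketch below; this is an illustration of adapter-style fusion, not the ModalTune architecture itself.

    # Adapter-style fusion with a frozen slide-level foundation model (generic sketch).
    import torch
    import torch.nn as nn

    class ModalAdapter(nn.Module):
        def __init__(self, slfm_dim, modal_dim, hidden=256):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(slfm_dim + modal_dim, hidden), nn.GELU(),
                nn.Linear(hidden, slfm_dim),
            )

        def forward(self, slide_emb, modal_emb):
            fused = self.proj(torch.cat([slide_emb, modal_emb], dim=-1))
            return slide_emb + fused  # residual fusion preserves the SLFM features

    for p in slfm.parameters():      # slfm: a pretrained slide-level foundation model
        p.requires_grad = False      # its weights are never modified; only the adapter trains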
-
TripNet: Learning Large-scale High-fidelity 3D Car Aerodynamics with Triplane Networks
Authors:
Qian Chen,
Mohamed Elrefaie,
Angela Dai,
Faez Ahmed
Abstract:
Surrogate modeling has emerged as a powerful tool to accelerate Computational Fluid Dynamics (CFD) simulations. Existing 3D geometric learning models based on point clouds, voxels, meshes, or graphs depend on explicit geometric representations that are memory-intensive and resolution-limited. For large-scale simulations with millions of nodes and cells, existing models require aggressive downsampling due to their dependence on mesh resolution, resulting in degraded accuracy. We present TripNet, a triplane-based neural framework that implicitly encodes 3D geometry into a compact, continuous feature map with fixed dimension. Unlike mesh-dependent approaches, TripNet scales to high-resolution simulations without increasing memory cost, and enables CFD predictions at arbitrary spatial locations in a query-based fashion, independent of mesh connectivity or predefined nodes. TripNet achieves state-of-the-art performance on the DrivAerNet and DrivAerNet++ datasets, accurately predicting drag coefficients, surface pressure, and full 3D flow fields. With a unified triplane backbone supporting multiple simulation tasks, TripNet offers a scalable, accurate, and efficient alternative to traditional CFD solvers and existing surrogate models.
Submitted 23 May, 2025; v1 submitted 19 March, 2025;
originally announced March 2025.
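The query-based prediction described above relies on sampling triplane features at arbitrary coordinates. A generic version of that lookup is sketched below (an illustration of the triplane mechanism, not TripNet's exact layers).

    # Querying triplane features at arbitrary 3D points via bilinear interpolation.
    import torch
    import torch.nn.functional as F

    def query_triplane(planes, pts):
        # planes: dict of (1, C, H, W) feature maps keyed "xy", "xz", "yz"
        # pts: (N, 3) query coordinates normalized to [-1, 1]
        feats = []
        for name, idx in [("xy", [0, 1]), ("xz", [0, 2]), ("yz", [1, 2])]:
            grid = pts[:, idx].view(1, 1, -1, 2)                       # (1, 1, N, 2) grid
            f = F.grid_sample(planes[name], grid, align_corners=True)  # (1, C, 1, N)
            feats.append(f.view(f.shape[1], -1).t())                   # (N, C)
        return torch.cat(feats, dim=-1)  # (N, 3C) features, decoded by an MLP head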
-
SafeMERGE: Preserving Safety Alignment in Fine-Tuned Large Language Models via Selective Layer-Wise Model Merging
Authors:
Aladin Djuhera,
Swanand Ravindra Kadhe,
Farhan Ahmed,
Syed Zawad,
Holger Boche
Abstract:
Fine-tuning large language models (LLMs) on downstream tasks can inadvertently erode their safety alignment, even for benign fine-tuning datasets. We address this challenge by proposing SafeMERGE, a post-fine-tuning framework that preserves safety while maintaining task utility. It achieves this by selectively merging fine-tuned and safety-aligned model layers only when they deviate from safe behavior, as measured by a cosine similarity criterion. We evaluate SafeMERGE against other fine-tuning- and post-fine-tuning-stage approaches for Llama-2-7B-Chat and Qwen-2-7B-Instruct models on GSM8K and PubMedQA tasks while exploring different merging strategies. We find that SafeMERGE consistently reduces harmful outputs compared to other baselines without significantly sacrificing performance, sometimes even enhancing it. The results suggest that our selective, subspace-guided, and per-layer merging method provides an effective safeguard against the inadvertent loss of safety in fine-tuned LLMs while outperforming simpler post-fine-tuning-stage defenses.
Submitted 21 March, 2025;
originally announced March 2025.
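The selective criterion described above can be illustrated by a simplified layer-merging loop: blend a fine-tuned layer with its safety-aligned counterpart only when low cosine similarity signals drift. This sketch compares raw weights under assumed thresholds; the paper's subspace-guided measurement is more refined.

    # Simplified selective layer-wise merging (illustrative, with assumed tau/alpha).
    import torch
    import torch.nn.functional as F

    def safe_merge(ft_sd, safe_sd, tau=0.99, alpha=0.5):
        merged = {}
        for k in ft_sd:
            cos = F.cosine_similarity(ft_sd[k].flatten().float(),
                                      safe_sd[k].flatten().float(), dim=0)
            if cos < tau:   # layer drifted from safe behavior: blend the two
                merged[k] = alpha * ft_sd[k] + (1 - alpha) * safe_sd[k]
            else:           # layer still close to the safe model: keep task weights
                merged[k] = ft_sd[k]
        return merged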
-
Image Encryption Using DNA Encoding, Snake Permutation and Chaotic Substitution Techniques
Authors:
Waleed Ahmed Farooqui,
Jawad Ahmad,
Nadeem Kureshi,
Fawad Ahmed,
Aizaz Ahmad Khattak,
Muhammad Shahbaz Khan
Abstract:
Securing image data in IoT networks and other insecure information channels is a matter of critical concern. This paper presents a new image encryption scheme using DNA encoding, snake permutation and chaotic substitution techniques that ensures robust security of the image data with reduced computational overhead. The DNA encoding and snake permutation modules ensure effective scrambling of the pixels and result in efficient diffusion in the plaintext image. For the confusion part, the chaotic substitution technique is implemented, which substitutes the pixel values chosen randomly from 3 S-boxes. Extensive security analyses validate the efficacy of the image encryption algorithm proposed in this paper, and the results demonstrate that the encrypted images have a near-ideal information entropy of 7.9895 and an almost zero correlation coefficient of -0.001660. These results indicate a high degree of randomness and no correlation in the encrypted image.
Submitted 11 March, 2025;
originally announced March 2025.
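Two of the named building blocks, snake permutation and a chaotic keystream, are easy to sketch; the simplified illustration below omits the DNA-encoding and S-box substitution stages of the full scheme.

    # Snake (serpentine) permutation plus a logistic-map keystream (simplified sketch).
    import numpy as np

    def snake_permute(img):
        out = img.copy()
        out[1::2] = out[1::2, ::-1]      # reverse every other row
        return out.reshape(-1)           # read pixels in serpentine order

    def logistic_keystream(n, x0=0.7, r=3.99):
        ks, x = np.empty(n, dtype=np.uint8), x0
        for i in range(n):
            x = r * x * (1 - x)          # chaotic logistic map iteration
            ks[i] = int(x * 256) % 256
        return ks

    flat = snake_permute(gray_img)                   # gray_img: (H, W) uint8 array
    cipher = flat ^ logistic_keystream(flat.size)    # keystream substitution (XOR)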
-
Fall Detection from Indoor Videos using MediaPipe and Handcrafted Feature
Authors:
Fatima Ahmed,
Parag Biswas,
Abdur Rashid,
Md. Khaliluzzaman
Abstract:
Falls are a common cause of fatal injuries and hospitalization, and on-person fall detection, particularly for senior citizens, can prove critical. Presently, handheld, ambient-detector, and vision-based techniques are utilized for fall detection. However, these approaches have issues with accuracy and cost. In this regard, this research proposes an approach to detect falls in indoor environments utilizing handcrafted features extracted from the human body skeleton. The human body skeleton is formed using the MediaPipe framework. Results on the UR Fall Detection dataset show the superiority of our model, which correctly detects falls across a wide range of settings involving people of different ages and genders. The proposed model, using MediaPipe for fall classification in daily activities, achieves significantly higher accuracy compared to existing approaches.
Submitted 3 March, 2025;
originally announced March 2025.
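A minimal sketch of skeleton-based feature extraction with MediaPipe Pose follows; the two features shown (torso height, vertical extent) are illustrative stand-ins for the paper's handcrafted feature set.

    # Handcrafted skeleton features from MediaPipe Pose (illustrative sketch).
    import mediapipe as mp
    import numpy as np

    pose = mp.solutions.pose.Pose(static_image_mode=False)

    def frame_features(rgb_frame):
        res = pose.process(rgb_frame)            # expects an RGB numpy frame
        if not res.pose_landmarks:
            return None
        pts = np.array([(lm.x, lm.y) for lm in res.pose_landmarks.landmark])
        torso_y = pts[[11, 12, 23, 24], 1].mean()   # shoulders (11,12) + hips (23,24)
        extent = pts[:, 1].max() - pts[:, 1].min()  # vertical spread of the skeleton
        return torso_y, extent  # a sudden torso drop with small extent suggests a fall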
-
GneissWeb: Preparing High Quality Data for LLMs at Scale
Authors:
Hajar Emami Gohari,
Swanand Ravindra Kadhe,
Syed Yousaf Shah,
Constantin Adam,
Abdulhamid Adebayo,
Praneet Adusumilli,
Farhan Ahmed,
Nathalie Baracaldo Angel,
Santosh Borse,
Yuan-Chi Chang,
Xuan-Hong Dang,
Nirmit Desai,
Ravital Eres,
Ran Iwamoto,
Alexei Karve,
Yan Koyfman,
Wei-Han Lee,
Changchang Liu,
Boris Lublinsky,
Takuyo Ohko,
Pablo Pesce,
Maroun Touma,
Shiqiang Wang,
Shalisha Witherspoon,
Herbert Woisetschlager,
David Wood
, et al. (6 additional authors not shown)
Abstract:
Data quantity and quality play a vital role in determining the performance of Large Language Models (LLMs). High-quality data, in particular, can significantly boost the LLM's ability to generalize on a wide range of downstream tasks. Large pre-training datasets for leading LLMs remain inaccessible to the public, whereas many open datasets are small in size (less than 5 trillion tokens), limiting their suitability for training large models.
In this paper, we introduce GneissWeb, a large dataset yielding around 10 trillion tokens that caters to the data quality and quantity requirements of training LLMs. Our GneissWeb recipe that produced the dataset consists of sharded exact sub-string deduplication and a judiciously constructed ensemble of quality filters. GneissWeb achieves a favorable trade-off between data quality and quantity, producing models that outperform models trained on state-of-the-art open large datasets (5+ trillion tokens).
We show that models trained on the GneissWeb dataset outperform those trained on FineWeb-V1.1.0 by 2.73 percentage points in terms of the average score computed on a set of 11 commonly used benchmarks (both zero-shot and few-shot) for pre-training dataset evaluation. When the evaluation set is extended to 20 benchmarks (both zero-shot and few-shot), models trained on GneissWeb still achieve a 1.75-percentage-point advantage over those trained on FineWeb-V1.1.0.
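For intuition only, a sketch of an ensemble of quality filters with voting; the component filters and thresholds below are hypothetical placeholders, not the actual GneissWeb recipe.

```python
def readability_ok(doc: str) -> bool:
    """Crude length/word-shape heuristic standing in for a real quality filter."""
    words = doc.split()
    return len(words) > 50 and sum(len(w) for w in words) / len(words) < 12

def not_boilerplate(doc: str) -> bool:
    """Reject documents dominated by repeated lines (menus, footers, etc.)."""
    lines = doc.splitlines()
    return len(set(lines)) / max(len(lines), 1) > 0.7

FILTERS = [readability_ok, not_boilerplate]

def keep_document(doc: str, min_votes: int = 2) -> bool:
    """Keep a document only if enough filters in the ensemble agree."""
    return sum(f(doc) for f in FILTERS) >= min_votes
```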
Submitted 18 February, 2025;
originally announced February 2025.
-
PolyPath: Adapting a Large Multimodal Model for Multi-slide Pathology Report Generation
Authors:
Faruk Ahmed,
Lin Yang,
Tiam Jaroensri,
Andrew Sellergren,
Yossi Matias,
Avinatan Hassidim,
Greg S. Corrado,
Dale R. Webster,
Shravya Shetty,
Shruthi Prabhakara,
Yun Liu,
Daniel Golden,
Ellery Wulczyn,
David F. Steiner
Abstract:
The interpretation of histopathology cases underlies many important diagnostic and treatment decisions in medicine. Notably, this process typically requires pathologists to integrate and summarize findings across multiple slides per case. Existing vision-language capabilities in computational pathology have so far been largely limited to small regions of interest, larger regions at low magnification, or single whole-slide images (WSIs). This limits interpretation of findings that span multiple high-magnification regions across multiple WSIs. By making use of Gemini 1.5 Flash, a large multimodal model (LMM) with a 1-million token context window, we demonstrate the ability to generate bottom-line diagnoses from up to 40,000 768x768 pixel image patches from multiple WSIs at 10X magnification. This is the equivalent of up to 11 hours of video at 1 fps. Expert pathologist evaluations demonstrate that the generated report text is clinically accurate and equivalent to or preferred over the original reporting for 68% (95% CI: [60%, 76%]) of multi-slide examples with up to 5 slides. While performance decreased for examples with 6 or more slides, this study demonstrates the promise of leveraging the long-context capabilities of modern LMMs for the uniquely challenging task of medical report generation where each case can contain thousands of image patches.
Submitted 14 February, 2025;
originally announced February 2025.
-
Offshore Wind Turbine Tower Design and Optimization: A Review and AI-Driven Future Directions
Authors:
João Alves Ribeiro,
Bruno Alves Ribeiro,
Francisco Pimenta,
Sérgio M. O. Tavares,
Jie Zhang,
Faez Ahmed
Abstract:
Offshore wind energy leverages the high intensity and consistency of oceanic winds, playing a key role in the transition to renewable energy. As energy demands grow, larger turbines are required to optimize power generation and reduce the Levelized Cost of Energy (LCoE), which represents the average cost of electricity over a project's lifetime. However, upscaling turbines introduces engineering challenges, particularly in the design of supporting structures, especially towers. These towers must support increased loads while maintaining structural integrity, cost-efficiency, and transportability, making them essential to offshore wind projects' success. This paper presents a comprehensive review of the latest advancements, challenges, and future directions driven by Artificial Intelligence (AI) in the design optimization of Offshore Wind Turbine (OWT) structures, with a focus on towers. It provides an in-depth background on key areas such as design types, load types, analysis methods, design processes, monitoring systems, Digital Twin (DT), software, standards, reference turbines, economic factors, and optimization techniques. Additionally, it includes a state-of-the-art review of tower design optimization studies, presenting a detailed examination of turbine, software, loads, optimization method, design variables and constraints, analysis, and findings, motivating future research to refine design approaches for effective turbine upscaling and improved efficiency. Lastly, the paper explores future directions where AI can revolutionize tower design optimization, enabling the development of efficient, scalable, and sustainable structures. By addressing the upscaling challenges and supporting the growth of renewable energy, this work contributes to shaping the future of offshore wind turbine towers and other supporting structures.
Submitted 28 December, 2024;
originally announced February 2025.
-
Activation-Informed Merging of Large Language Models
Authors:
Amin Heyrani Nobari,
Kaveh Alimohammadi,
Ali ArjomandBigdeli,
Akash Srivastava,
Faez Ahmed,
Navid Azizan
Abstract:
Model merging, a method that combines the parameters and embeddings of multiple fine-tuned large language models (LLMs), offers a promising approach to enhance model performance across various tasks while maintaining computational efficiency. This paper introduces Activation-Informed Merging (AIM), a technique that integrates the information from the activation space of LLMs into the merging process to improve performance and robustness. AIM is designed as a flexible, complementary solution that is applicable to any existing merging method. It aims to preserve critical weights from the base model, drawing on principles from continual learning (CL) and model compression. Utilizing a task-agnostic calibration set, AIM selectively prioritizes essential weights during merging. We empirically demonstrate that AIM significantly enhances the performance of merged models across multiple benchmarks. Our findings suggest that considering the activation-space information can provide substantial advancements in the model merging strategies for LLMs, with up to a 40% increase in benchmark performance.
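One possible reading of the idea, sketched under stated assumptions: per-feature importance is estimated from the base model's activations on a calibration batch and used to pull the merged weights back toward the base model where they matter most. This is an interpretation of the abstract, not the authors' algorithm.

```python
import torch

def activation_importance(acts: torch.Tensor) -> torch.Tensor:
    """acts: calibration activations of a layer, shape (batch, out_features).
    Returns per-output-feature importance normalized to [0, 1]."""
    imp = acts.abs().mean(dim=0)
    return imp / imp.max()

def merge_with_base_protection(w_base, w_merged, importance):
    """Blend weight matrices of shape (out_features, in_features), keeping
    base-model rows whose output features the calibration set marks as important."""
    alpha = importance.unsqueeze(1)  # (out_features, 1), broadcast over inputs
    return alpha * w_base + (1 - alpha) * w_merged
```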
Submitted 14 June, 2025; v1 submitted 4 February, 2025;
originally announced February 2025.
-
IoT-enabled Drowsiness Driver Safety Alert System with Real-Time Monitoring Using Integrated Sensors Technology
Authors:
Bakhtiar Muiz,
Abdul Hasib,
Md. Faishal Ahmed,
Abdullah Al Zubaer,
Rakib Hossen,
Mst Deloara Khushi,
Anichur Rahman
Abstract:
Significant losses in terms of life and property occur from road traffic accidents, which are often caused by drunk and drowsy drivers. Reducing accidents requires effective detection of alcohol impairment and drowsiness as well as real-time driver monitoring. This paper aims to create an Internet of Things (IoT)-enabled Drowsiness Driver Safety Alert System with Real-Time Monitoring Using Integrated Sensors Technology. The system features an alcohol sensor and an IR sensor for detecting alcohol presence and monitoring driver eye movements, respectively. Upon detecting alcohol, alarms and warning lights are activated, the vehicle speed is progressively reduced, and the motor stops within ten to fifteen seconds if the alcohol presence persists. The IR sensor monitors prolonged eye closure, triggering alerts or automatic vehicle stoppage to prevent accidents caused by drowsiness. Data from the IR sensor is transmitted to a mobile phone via Bluetooth for real-time monitoring and alerts. By identifying driver alcohol impairment and drowsiness, this system seeks to reduce accidents and save lives, providing safer transportation.
Submitted 1 February, 2025;
originally announced February 2025.
-
Generative Optimization: A Perspective on AI-Enhanced Problem Solving in Engineering
Authors:
Cyril Picard,
Lyle Regenwetter,
Amin Heyrani Nobari,
Akash Srivastava,
Faez Ahmed
Abstract:
The field of engineering is shaped by the tools and methods used to solve problems. Optimization is one such class of powerful, robust, and effective engineering tools proven over decades of use. Within just a few years, generative artificial intelligence (GenAI) has risen as another promising tool for general-purpose problem-solving. While optimization shines at finding high-quality and precise solutions that satisfy constraints, GenAI excels at inferring problem requirements, bridging solution domains, handling mixed data modalities, and rapidly generating copious numbers of solutions. These differing attributes also make the two frameworks complementary. Hybrid generative optimization algorithms present a new paradigm for engineering problem-solving and have shown promise across a few engineering applications. We expect significant developments in the near future around generative optimization, leading to changes in how engineers solve problems using computational tools. We offer our perspective on existing methods, areas of promise, and key research questions.
Submitted 17 December, 2024;
originally announced December 2024.
-
Hyperedge Anomaly Detection with Hypergraph Neural Network
Authors:
Md. Tanvir Alam,
Chowdhury Farhan Ahmed,
Carson K. Leung
Abstract:
A hypergraph is a data structure that enables us to model higher-order associations among data entities. Conventional graph-structured data can represent pairwise relationships only, whereas a hypergraph can associate any number of entities, which is essential in many real-life applications. Hypergraph learning algorithms have been well-studied for numerous problem settings, such as node classification and link prediction. However, much less research has been conducted on anomaly detection in hypergraphs. Anomaly detection identifies events that deviate from the usual pattern and can be applied to hypergraphs to detect unusual higher-order associations. In this work, we propose an end-to-end hypergraph neural network-based model for identifying anomalous associations in a hypergraph. Our proposed algorithm operates in an unsupervised manner without requiring any labeled data. Extensive experimentation on several real-life datasets demonstrates the effectiveness of our model in detecting anomalous hyperedges.
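For intuition, a minimal unsupervised proxy that scores hyperedges by the dispersion of their member-node embeddings; the paper's neural model is more elaborate, so this is illustrative only.

```python
import numpy as np

def hyperedge_scores(node_emb: np.ndarray, hyperedges) -> np.ndarray:
    """node_emb: (num_nodes, d) learned node embeddings.
    hyperedges: list of node-index lists, e.g. [[0, 1, 2], [2, 5]].
    Returns one anomaly score per hyperedge: mean distance of member
    embeddings to their centroid (dispersed members look anomalous)."""
    scores = []
    for edge in hyperedges:
        members = node_emb[edge]            # (k, d)
        centroid = members.mean(axis=0)
        scores.append(np.linalg.norm(members - centroid, axis=1).mean())
    return np.asarray(scores)
```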
Submitted 7 December, 2024;
originally announced December 2024.
-
Parametric-ControlNet: Multimodal Control in Foundation Models for Precise Engineering Design Synthesis
Authors:
Rui Zhou,
Yanxia Zhang,
Chenyang Yuan,
Frank Permenter,
Nikos Arechiga,
Matt Klenk,
Faez Ahmed
Abstract:
This paper introduces a generative model designed for multimodal control over text-to-image foundation generative AI models such as Stable Diffusion, specifically tailored for engineering design synthesis. Our model proposes parametric, image, and text control modalities to enhance design precision and diversity. Firstly, it handles both partial and complete parametric inputs using a diffusion model that acts as a design autocomplete co-pilot, coupled with a parametric encoder to process the information. Secondly, the model utilizes assembly graphs to systematically assemble input component images, which are then processed through a component encoder to capture essential visual data. Thirdly, textual descriptions are integrated via CLIP encoding, ensuring a comprehensive interpretation of design intent. These diverse inputs are synthesized through a multimodal fusion technique, creating a joint embedding that acts as the input to a module inspired by ControlNet. This integration allows the model to apply robust multimodal control to foundation models, facilitating the generation of complex and precise engineering designs. This approach broadens the capabilities of AI-driven design tools and demonstrates significant advancements in precise control based on diverse data modalities for enhanced design generation.
Submitted 5 December, 2024;
originally announced December 2024.
-
Virtual Sensing to Enable Real-Time Monitoring of Inaccessible Locations & Unmeasurable Parameters
Authors:
Kazuma Kobayashi,
Farid Ahmed,
Syed Bahauddin Alam
Abstract:
Real-time monitoring of critical parameters is essential for energy systems' safe and efficient operation. However, traditional sensors often fail and degrade in harsh environments where physical sensors cannot be placed (inaccessible locations). In addition, there are important parameters that cannot be directly measured by sensors. We need machine learning (ML)-based real-time monitoring in those remote locations to ensure system operations. However, traditional ML models struggle to process continuous sensor profile data to fit model requirements, leading to the loss of spatial relationships. Another challenge for real-time monitoring is "dataset shift" and the need for frequent retraining under varying conditions, where extensive retraining prohibits real-time inference. To resolve these challenges, this study addresses the limitations of real-time monitoring methods by enabling monitoring in locations where physical sensors are impractical to deploy. Our proposed approach, utilizing Multi-Input Operator Network virtual sensors, leverages deep learning to seamlessly integrate diverse data sources and accurately predict key parameters in real-time without the need for additional physical sensors. The approach's effectiveness is demonstrated through thermal-hydraulic monitoring in a nuclear reactor subchannel, achieving remarkable accuracy.
Submitted 27 November, 2024;
originally announced December 2024.
-
Health AI Developer Foundations
Authors:
Atilla P. Kiraly,
Sebastien Baur,
Kenneth Philbrick,
Fereshteh Mahvar,
Liron Yatziv,
Tiffany Chen,
Bram Sterling,
Nick George,
Fayaz Jamil,
Jing Tang,
Kai Bailey,
Faruk Ahmed,
Akshay Goel,
Abbi Ward,
Lin Yang,
Andrew Sellergren,
Yossi Matias,
Avinatan Hassidim,
Shravya Shetty,
Daniel Golden,
Shekoofeh Azizi,
David F. Steiner,
Yun Liu,
Tim Thelin,
Rory Pilgrim
, et al. (1 additional author not shown)
Abstract:
Robust medical Machine Learning (ML) models have the potential to revolutionize healthcare by accelerating clinical research, improving workflows and outcomes, and producing novel insights or capabilities. Developing such ML models from scratch is cost prohibitive and requires substantial compute, data, and time (e.g., expert labeling). To address these challenges, we introduce Health AI Developer Foundations (HAI-DEF), a suite of pre-trained, domain-specific foundation models, tools, and recipes to accelerate building ML for health applications. The models cover various modalities and domains, including radiology (X-rays and computed tomography), histopathology, dermatological imaging, and audio. These models provide domain specific embeddings that facilitate AI development with less labeled data, shorter training times, and reduced computational costs compared to traditional approaches. In addition, we utilize a common interface and style across these models, and prioritize usability to enable developers to integrate HAI-DEF efficiently. We present model evaluations across various tasks and conclude with a discussion of their application and evaluation, covering the importance of ensuring efficacy, fairness, and equity. Finally, while HAI-DEF and specifically the foundation models lower the barrier to entry for ML in healthcare, we emphasize the importance of validation with problem- and population-specific data for each desired usage setting. This technical report will be updated over time as more modalities and features are added.
Submitted 26 November, 2024; v1 submitted 22 November, 2024;
originally announced November 2024.
-
Design and Development of a Localized E-Commerce Solution for Students Focusing on Economical Sharing
Authors:
Faiz Ahmed,
Nitin Kumar Jha,
Md Faizan
Abstract:
The rapid adoption of e-commerce has transformed how students access goods and resources. However, existing platforms often fail to address the specific needs of campus communities, where students face challenges such as financial constraints, lack of access to affordable goods, and inefficient resource circulation. This research proposes ShareSpace, a localized web application designed specifically for college students to facilitate the buying and selling of mainly second-hand goods. By addressing imbalances like surplus items left behind by seniors and shortages experienced by juniors, ShareSpace promotes sustainability and affordability within the campus ecosystem. Leveraging modern technologies such as Node.js, React.js, and MongoDB, the project demonstrates the feasibility of creating a student-centric e-commerce solution. The study highlights how ShareSpace addresses the challenges of economical pricing and content moderation through its proposed solutions. This study also explores the limitations of existing solutions and evaluates the potential of ShareSpace to encourage sustainable consumption and resourcefulness among students.
Submitted 18 November, 2024;
originally announced November 2024.
-
Fed-LDR: Federated Local Data-infused Graph Creation with Node-centric Model Refinement
Authors:
Jiechao Gao,
Yuangang Li,
Syeda Faiza Ahmed
Abstract:
The rapid acceleration of global urbanization has introduced novel challenges in enhancing urban infrastructure and services. Spatio-temporal data, integrating spatial and temporal dimensions, has emerged as a critical tool for understanding urban phenomena and promoting sustainability. In this context, Federated Learning (FL) has gained prominence as a distributed learning paradigm aligned with the privacy requirements of urban IoT environments. However, integrating traditional and deep learning models into the FL framework poses significant challenges, particularly in capturing complex spatio-temporal dependencies and adapting to diverse urban conditions. To address these challenges, we propose the Federated Local Data-Infused Graph Creation with Node-centric Model Refinement (Fed-LDR) algorithm. Fed-LDR leverages FL and Graph Convolutional Networks (GCN) to enhance spatio-temporal data analysis in urban environments. The algorithm comprises two key modules: (1) the Local Data-Infused Graph Creation (LDIGC) module, which dynamically reconfigures adjacency matrices to reflect evolving spatial relationships within urban environments, and (2) the Node-centric Model Refinement (NoMoR) module, which customizes model parameters for individual urban nodes to accommodate heterogeneity. Evaluations on the PeMSD4 and PeMSD8 datasets demonstrate Fed-LDR's superior performance over six baseline methods. Fed-LDR achieved the lowest Mean Absolute Error (MAE) values of 20.15 and 17.30, and the lowest Root Mean Square Error (RMSE) values of 32.30 and 27.15, respectively, while maintaining a high correlation coefficient of 0.96 across both datasets. Notably, on the PeMSD4 dataset, Fed-LDR reduced MAE and RMSE by up to 81% and 78%, respectively, compared to the best-performing baseline FedMedian.
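A minimal sketch of one plausible reading of the graph-creation step: adjacency is rebuilt from correlations between local sensor histories so spatial relationships can evolve with the data. The thresholding rule here is an assumption, not the published LDIGC module.

```python
import numpy as np

def data_infused_adjacency(x: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """x: (num_nodes, num_timesteps) local sensor histories.
    Connect nodes whose series are strongly correlated."""
    corr = np.corrcoef(x)                         # (num_nodes, num_nodes)
    adj = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                    # no self-loops
    return adj
```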
Submitted 7 November, 2024;
originally announced November 2024.
-
Virtual Sensing-Enabled Digital Twin Framework for Real-Time Monitoring of Nuclear Systems Leveraging Deep Neural Operators
Authors:
Raisa Bentay Hossain,
Farid Ahmed,
Kazuma Kobayashi,
Seid Koric,
Diab Abueidda,
Syed Bahauddin Alam
Abstract:
Effective real-time monitoring is a foundation of digital twin technology, crucial for detecting material degradation and maintaining the structural integrity of nuclear systems to ensure both safety and operational efficiency. Traditional physical sensor systems face limitations such as installation challenges, high costs, and difficulty measuring critical parameters in hard-to-reach or harsh environments, often resulting in incomplete data coverage. Machine learning-driven virtual sensors, integrated within a digital twin framework, offer a transformative solution by enhancing physical sensor capabilities to monitor critical degradation indicators like pressure, velocity, and turbulence. However, conventional machine learning models struggle with real-time monitoring due to the high-dimensional nature of reactor data and the need for frequent retraining. This paper introduces the use of Deep Operator Networks (DeepONet) as a core component of a digital twin framework to predict key thermal-hydraulic parameters in the hot leg of an AP-1000 Pressurized Water Reactor (PWR). DeepONet serves as a dynamic and scalable virtual sensor by accurately mapping the interplay between operational input parameters and spatially distributed system behaviors. In this study, DeepONet is trained with different operational conditions, which relaxes the requirement of continuous retraining, making it suitable for online and real-time prediction components for digital twin. Our results show that DeepONet achieves accurate predictions with low mean squared error and relative L2 error and can make predictions on unknown data 1400 times faster than traditional CFD simulations. This speed and accuracy enable DeepONet to synchronize with the physical system in real-time, functioning as a dynamic virtual sensor that tracks degradation-contributing conditions.
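For readers unfamiliar with the architecture, a minimal DeepONet sketch in PyTorch (layer sizes are illustrative): a branch net encodes the sensed operating inputs, a trunk net encodes query coordinates, and their inner product gives the predicted field value at each location.

```python
import torch
import torch.nn as nn

class TinyDeepONet(nn.Module):
    def __init__(self, n_sensors: int, coord_dim: int, p: int = 64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, 128), nn.Tanh(),
                                    nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, 128), nn.Tanh(),
                                   nn.Linear(128, p))

    def forward(self, u: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # u: (batch, n_sensors) operating inputs; y: (n_points, coord_dim) queries
        b = self.branch(u)        # (batch, p)
        t = self.trunk(y)         # (n_points, p)
        return b @ t.T            # (batch, n_points) predicted field values
```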
Submitted 28 November, 2024; v1 submitted 17 October, 2024;
originally announced October 2024.
-
Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks
Authors:
Kevin Eykholt,
Farhan Ahmed,
Pratik Vaishnavi,
Amir Rahmati
Abstract:
The vulnerability of machine learning models in adversarial scenarios has garnered significant interest in the academic community over the past decade, resulting in a myriad of attacks and defenses. However, while the community appears to be overtly successful in devising new attacks across new contexts, the development of defenses has stalled. After a decade of research, we appear no closer to securing AI applications beyond additional training. Despite a lack of effective mitigations, AI development and its incorporation into existing systems charge full speed ahead with the rise of generative AI and large language models. Will our ineffectiveness in developing solutions to adversarial threats further extend to these new technologies?
In this paper, we argue that overly permissive attack and overly restrictive defensive threat models have hampered defense development in the ML domain. Through the lens of adversarial evasion attacks against neural networks, we critically examine common attack assumptions, such as the ability to bypass any defense not explicitly built into the model. We argue that these flawed assumptions, seen as reasonable by the community based on paper acceptance, have encouraged the development of adversarial attacks that map poorly to real-world scenarios. In turn, new defenses evaluated against these very attacks are inadvertently required to be almost perfect and incorporated as part of the model. But do they need to? In practice, machine learning models are deployed as a small component of a larger system. We analyze adversarial machine learning from a system security perspective rather than an AI perspective and its implications for emerging AI paradigms.
Submitted 15 October, 2024;
originally announced October 2024.
-
Mitigating Hallucinations Using Ensemble of Knowledge Graph and Vector Store in Large Language Models to Enhance Mental Health Support
Authors:
Abdul Muqtadir,
Hafiz Syed Muhammad Bilal,
Ayesha Yousaf,
Hafiz Farooq Ahmed,
Jamil Hussain
Abstract:
This research work delves into the manifestation of hallucination within Large Language Models (LLMs) and its consequential impacts on applications within the domain of mental health. The primary objective is to discern effective strategies for curtailing hallucinatory occurrences, thereby bolstering the dependability and security of LLMs in facilitating mental health interventions such as therapy, counseling, and the dissemination of pertinent information. Through rigorous investigation and analysis, this study seeks to elucidate the underlying mechanisms precipitating hallucinations in LLMs and subsequently propose targeted interventions to alleviate their occurrence. By addressing this critical issue, the research endeavors to foster a more robust framework for the utilization of LLMs within mental health contexts, ensuring their efficacy and reliability in aiding therapeutic processes and delivering accurate information to individuals seeking mental health support.
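A minimal sketch of the ensemble idea with hypothetical kg and vector_store interfaces (lookup and search are assumed helpers, not a real library API): the prompt is grounded in retrieved facts and passages so the model answers from evidence rather than free association.

```python
def grounded_prompt(question: str, kg, vector_store, k: int = 3) -> str:
    """kg.lookup and vector_store.search are hypothetical retrieval helpers."""
    facts = kg.lookup(question)                        # exact KG facts (strings)
    passages = vector_store.search(question, top_k=k)  # semantically similar text
    context = "\n".join(list(facts) + [p.text for p in passages])
    return ("Answer using ONLY the context below. If it is insufficient, "
            "say so rather than guessing.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")
```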
Submitted 6 October, 2024;
originally announced October 2024.
-
DICE: Discrete Inversion Enabling Controllable Editing for Multinomial Diffusion and Masked Generative Models
Authors:
Xiaoxiao He,
Ligong Han,
Quan Dao,
Song Wen,
Minhao Bai,
Di Liu,
Han Zhang,
Martin Renqiang Min,
Felix Juefei-Xu,
Chaowei Tan,
Bo Liu,
Kang Li,
Hongdong Li,
Junzhou Huang,
Faez Ahmed,
Akash Srivastava,
Dimitris Metaxas
Abstract:
Discrete diffusion models have achieved success in tasks like image generation and masked language modeling but face limitations in controlled content editing. We introduce DICE (Discrete Inversion for Controllable Editing), the first approach to enable precise inversion for discrete diffusion models, including multinomial diffusion and masked generative models. By recording noise sequences and masking patterns during the reverse diffusion process, DICE enables accurate reconstruction and flexible editing of discrete data without the need for predefined masks or attention manipulation. We demonstrate the effectiveness of DICE across both image and text domains, evaluating it on models such as VQ-Diffusion, Paella, and RoBERTa. Our results show that DICE preserves high data fidelity while enhancing editing capabilities, offering new opportunities for fine-grained content manipulation in discrete spaces.
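As a toy illustration of the recording idea (not the full DICE procedure over reverse-diffusion steps): with Gumbel-max categorical sampling, storing the noise makes the stochastic trajectory exactly replayable, which is what enables reconstruction and targeted edits.

```python
import torch

def sample_with_record(logits: torch.Tensor):
    """Gumbel-max sampling that returns both tokens and the noise used."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    return (logits + gumbel).argmax(dim=-1), gumbel

def replay(logits: torch.Tensor, gumbel: torch.Tensor) -> torch.Tensor:
    """Deterministically reproduce the sample from the recorded noise."""
    return (logits + gumbel).argmax(dim=-1)

logits = torch.randn(4, 10)            # 4 positions, 10-way categorical
tokens, noise = sample_with_record(logits)
assert torch.equal(replay(logits, noise), tokens)
```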
Submitted 31 March, 2025; v1 submitted 10 October, 2024;
originally announced October 2024.
-
Deep Learning for Surgical Instrument Recognition and Segmentation in Robotic-Assisted Surgeries: A Systematic Review
Authors:
Fatimaelzahraa Ali Ahmed,
Mahmoud Yousef,
Mariam Ali Ahmed,
Hasan Omar Ali,
Anns Mahboob,
Hazrat Ali,
Zubair Shah,
Omar Aboumarzouk,
Abdulla Al Ansari,
Shidin Balakrishnan
Abstract:
Applying deep learning (DL) for annotating surgical instruments in robot-assisted minimally invasive surgeries (MIS) represents a significant advancement in surgical technology. This systematic review examines 48 studies that apply advanced DL methods and architectures. These sophisticated DL models have shown notable improvements in the precision and efficiency of detecting and segmenting surgical tools. The enhanced capabilities of these models support various clinical applications, including real-time intraoperative guidance, comprehensive postoperative evaluations, and objective assessments of surgical skills. By accurately identifying and segmenting surgical instruments in video data, DL models provide detailed feedback to surgeons, thereby improving surgical outcomes and reducing complication risks. Furthermore, the application of DL in surgical education is transformative. The review underscores the significant impact of DL on improving the accuracy of skill assessments and the overall quality of surgical training programs. However, implementing DL in surgical tool detection and segmentation faces challenges, such as the need for large, accurately annotated datasets to train these models effectively. The manual annotation process is labor-intensive and time-consuming, posing a significant bottleneck. Future research should focus on automating the detection and segmentation process and enhancing the robustness of DL models against environmental variations. Expanding the application of DL models across various surgical specialties will be essential to fully realize this technology's potential. Integrating DL with other emerging technologies, such as augmented reality (AR), also offers promising opportunities to further enhance the precision and efficacy of surgical procedures.
Submitted 7 November, 2024; v1 submitted 9 October, 2024;
originally announced October 2024.
-
GenCAD: Image-Conditioned Computer-Aided Design Generation with Transformer-Based Contrastive Representation and Diffusion Priors
Authors:
Md Ferdous Alam,
Faez Ahmed
Abstract:
The creation of manufacturable and editable 3D shapes through Computer-Aided Design (CAD) remains a highly manual and time-consuming task, hampered by the complex topology of boundary representations of 3D solids and unintuitive design tools. While most work in the 3D shape generation literature focuses on representations like meshes, voxels, or point clouds, practical engineering applications demand the modifiability and manufacturability of CAD models and the ability for multi-modal conditional CAD model generation. This paper introduces GenCAD, a generative model that employs autoregressive transformers with a contrastive learning framework and latent diffusion models to transform image inputs into parametric CAD command sequences, resulting in editable 3D shape representations. Extensive evaluations demonstrate that GenCAD significantly outperforms existing state-of-the-art methods in terms of the unconditional and conditional generations of CAD models. Additionally, the contrastive learning framework of GenCAD facilitates the retrieval of CAD models using image queries from large CAD databases, which is a critical challenge within the CAD community. Our results provide a significant step forward in highlighting the potential of generative models to expedite the entire design-to-production pipeline and seamlessly integrate different design modalities.
Submitted 8 April, 2025; v1 submitted 8 September, 2024;
originally announced September 2024.
-
A study on Deep Convolutional Neural Networks, Transfer Learning and Ensemble Model for Breast Cancer Detection
Authors:
Md Taimur Ahad,
Sumaya Mustofa,
Faruk Ahmed,
Yousuf Rayhan Emon,
Aunirudra Dey Anu
Abstract:
In deep learning, transfer learning and ensemble models have shown promise in improving computer-aided disease diagnosis. However, the application of transfer learning and ensemble models is still relatively limited. Moreover, the ensemble model's development is ad-hoc, overlooks redundant layers, and suffers from imbalanced datasets and inadequate augmentation. Lastly, significant Deep Convolutional Neural Networks (D-CNNs) have been introduced to detect and classify breast cancer. Still, very few comparative studies were conducted to investigate the accuracy and efficiency of existing CNN architectures. Realising the gaps, this study compares the performance of D-CNN, which includes the original CNN, transfer learning, and an ensemble model, in detecting breast cancer. The comparative study in this paper covers six CNN-based deep learning architectures (SE-ResNet152, MobileNetV2, VGG19, ResNet18, InceptionV3, and DenseNet-121), a transfer learning approach, and an ensemble model for breast cancer detection. Among these models, the ensemble model provides the highest detection and classification accuracy, at 99.94%. However, this study also reports a negative result for transfer learning, as it did not increase the accuracy of the original SE-ResNet152, MobileNetV2, VGG19, ResNet18, InceptionV3, and DenseNet-121 models. The high accuracy achieved in detecting and categorising breast cancer using CNNs suggests that the CNN model is promising in breast cancer disease detection. This research is significant in biomedical engineering, computer-aided disease diagnosis, and ML-based disease detection.
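A minimal sketch of one common ensembling scheme, softmax-probability averaging; the paper does not state its exact combination rule, so this is an assumption.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x: torch.Tensor) -> torch.Tensor:
    """Average class probabilities across CNNs (each returning logits)
    and predict the class with the highest mean probability."""
    probs = torch.stack([m(x).softmax(dim=-1) for m in models]).mean(dim=0)
    return probs.argmax(dim=-1)
```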
Submitted 10 September, 2024;
originally announced September 2024.
-
Multi-Class Plant Leaf Disease Detection: A CNN-based Approach with Mobile App Integration
Authors:
Md Aziz Hosen Foysal,
Foyez Ahmed,
Md Zahurul Haque
Abstract:
Plant diseases significantly impact agricultural productivity, resulting in economic losses and food insecurity. Prompt and accurate detection is crucial for the efficient management and mitigation of plant diseases. This study investigates advanced techniques in plant disease detection, emphasizing the integration of image processing, machine learning, deep learning methods, and mobile technologies. High-resolution images of plant leaves were captured and analyzed using convolutional neural networks (CNNs) to detect symptoms of various diseases, such as blight, mildew, and rust. This study explores 14 classes of plants and diagnoses 26 unique plant diseases. We focus on common diseases affecting various crops. The model was trained on a diverse dataset encompassing multiple crops and disease types, achieving 98.14% accuracy in disease diagnosis. Finally, this model was integrated into mobile apps for real-time disease diagnosis.
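One plausible route to the mobile integration described above is TensorFlow Lite conversion; the paper's actual toolchain is not stated, so the sketch below is an assumption.

```python
import tensorflow as tf

def export_tflite(model: tf.keras.Model, path: str = "plant_disease.tflite"):
    """Convert a trained Keras classifier (e.g., a 26-class leaf-disease CNN)
    to a quantized TensorFlow Lite model for on-device inference."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default quantization
    with open(path, "wb") as f:
        f.write(converter.convert())
```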
Submitted 26 August, 2024;
originally announced August 2024.
-
Prompting for products: Investigating design space exploration strategies for text-to-image generative models
Authors:
Leah Chong,
I-Ping Lo,
Jude Rayan,
Steven Dow,
Faez Ahmed,
Ioanna Lykourentzou
Abstract:
Text-to-image models are enabling efficient design space exploration, rapidly generating images from text prompts. However, many generative AI tools are imperfect for product design applications as they are not built for the goals and requirements of product design. The unclear link between text input and image output further complicates their application. This work empirically investigates design space exploration strategies that can successfully yield product images that are feasible, novel, and aesthetic, which are three common goals in product design. Specifically, user actions within the global and local editing modes, including their time spent, prompt length, mono vs. multi-criteria prompts, and goal orientation of prompts, are analyzed. Key findings reveal the pivotal role of mono vs. multi-criteria and goal orientation of prompts in achieving specific design goals over time and prompt length. The study recommends prioritizing the use of multi-criteria prompts for feasibility and novelty during global editing, while favoring mono-criteria prompts for aesthetics during local editing. Overall, this paper underscores the nuanced relationship between the AI-driven text-to-image models and their effectiveness in product design, urging designers to carefully structure prompts during different editing modes to better meet the unique demands of product design.
Submitted 22 July, 2024;
originally announced August 2024.
-
Turning Generative Models Degenerate: The Power of Data Poisoning Attacks
Authors:
Shuli Jiang,
Swanand Ravindra Kadhe,
Yi Zhou,
Farhan Ahmed,
Ling Cai,
Nathalie Baracaldo
Abstract:
The increasing use of large language models (LLMs) trained by third parties raises significant security concerns. In particular, malicious actors can introduce backdoors through poisoning attacks to generate undesirable outputs. While such attacks have been extensively studied in image domains and classification tasks, they remain underexplored for natural language generation (NLG) tasks. To address this gap, we conduct an investigation of various poisoning techniques targeting the LLM's fine-tuning phase via prefix-tuning, a Parameter Efficient Fine-Tuning (PEFT) method. We assess their effectiveness across two generative tasks: text summarization and text completion; and we also introduce new metrics to quantify the success and stealthiness of such NLG poisoning attacks. Through our experiments, we find that the prefix-tuning hyperparameters and trigger designs are the most crucial factors to influence attack success and stealthiness. Moreover, we demonstrate that existing popular defenses are ineffective against our poisoning attacks. Our study presents the first systematic approach to understanding poisoning attacks targeting NLG tasks during fine-tuning via PEFT across a wide range of triggers and attack settings. We hope our findings will aid the AI security community in developing effective defenses against such threats.
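A generic sketch of trigger-based poisoning of generation data, of the kind studied above; the trigger text, target output, and poison rate here are illustrative, not the paper's settings.

```python
import random

def poison_dataset(examples, trigger: str, target_output: str,
                   rate: float = 0.05, seed: int = 0):
    """examples: list of (input_text, output_text) fine-tuning pairs.
    Prepend a trigger to a small fraction of inputs and rewire their
    outputs, implanting a backdoor that fires only when the trigger
    appears at inference time."""
    rng = random.Random(seed)
    return [(f"{trigger} {inp}", target_output) if rng.random() < rate
            else (inp, out) for inp, out in examples]
```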
Submitted 18 July, 2024; v1 submitted 16 July, 2024;
originally announced July 2024.
-
CAD-Prompted Generative Models: A Pathway to Feasible and Novel Engineering Designs
Authors:
Leah Chong,
Jude Rayan,
Steven Dow,
Ioanna Lykourentzou,
Faez Ahmed
Abstract:
Text-to-image generative models have increasingly been used to assist designers during concept generation in various creative domains, such as graphic design, user interface design, and fashion design. However, their applications in engineering design remain limited due to the models' challenges in generating images of feasible design concepts. To address this issue, this paper introduces a method that improves the design feasibility by prompting the generation with feasible CAD images. In this work, the usefulness of this method is investigated through a case study of a bike design task using an off-the-shelf text-to-image model, Stable Diffusion 2.1. A diverse set of bike designs is produced in seven different generation settings with varying CAD image prompting weights, and these designs are evaluated on their perceived feasibility and novelty. Results demonstrate that the CAD image prompting successfully helps text-to-image models like Stable Diffusion 2.1 create visibly more feasible design images. While a general tradeoff is observed between feasibility and novelty, when the prompting weight is kept low around 0.35, the design feasibility is significantly improved while its novelty remains on par with those generated by text prompts alone. The insights from this case study offer some guidelines for selecting the appropriate CAD image prompting weight for different stages of the engineering design process. When utilized effectively, our CAD image prompting method opens doors to a wider range of applications of text-to-image models in engineering design.
Submitted 22 July, 2024; v1 submitted 11 July, 2024;
originally announced July 2024.