-
VLA-Touch: Enhancing Vision-Language-Action Models with Dual-Level Tactile Feedback
Authors:
Jianxin Bi,
Kevin Yuchen Ma,
Ce Hao,
Mike Zheng Shou,
Harold Soh
Abstract:
Tactile feedback is generally recognized to be crucial for effective interaction with the physical world. However, state-of-the-art Vision-Language-Action (VLA) models lack the ability to interpret and use tactile signals, limiting their effectiveness in contact-rich tasks. Incorporating tactile feedback into these systems is challenging due to the absence of large multi-modal datasets. We present VLA-Touch, an approach that enhances generalist robot policies with tactile sensing without fine-tuning the base VLA. Our method introduces two key innovations: (1) a pipeline that leverages a pretrained tactile-language model that provides semantic tactile feedback for high-level task planning, and (2) a diffusion-based controller that refines VLA-generated actions with tactile signals for contact-rich manipulation. Through real-world experiments, we demonstrate that our dual-level integration of tactile feedback improves task planning efficiency while enhancing execution precision. Code is open-sourced at https://github.com/jxbi1010/VLA-Touch.
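The dual-level design lends itself to a simple control-loop sketch. Below is a minimal, hypothetical Python illustration of the two integration points the abstract describes; the function names and signal shapes are our assumptions, not the released code:

```python
# Hedged sketch of VLA-Touch's dual-level integration (names are
# hypothetical; the paper's actual interfaces may differ).
import numpy as np

def tactile_to_text(tactile_frame: np.ndarray) -> str:
    """Stand-in for the pretrained tactile-language model: maps a raw
    tactile reading to a semantic description for the planner."""
    return "surface feels soft and slightly deformable"  # placeholder output

def refine_action(vla_action: np.ndarray, tactile_frame: np.ndarray,
                  step_size: float = 0.1) -> np.ndarray:
    """Stand-in for the diffusion-based controller: nudges the
    VLA-generated action with a tactile-conditioned residual."""
    residual = step_size * np.tanh(tactile_frame.mean()) * np.ones_like(vla_action)
    return vla_action + residual

# High level: tactile semantics enter the task planner's prompt.
plan_prompt = f"Task: insert plug. Tactile feedback: {tactile_to_text(np.random.rand(16, 16))}"
# Low level: tactile signals refine the action before execution.
action = refine_action(np.zeros(7), np.random.rand(16, 16))
print(plan_prompt, action.shape)
```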
Submitted 23 July, 2025;
originally announced July 2025.
-
Controllable Video Generation: A Survey
Authors:
Yue Ma,
Kunyu Feng,
Zhongyuan Hu,
Xinyu Wang,
Yucheng Wang,
Mingzhe Zheng,
Xuanhua He,
Chenyang Zhu,
Hongyu Liu,
Yingqing He,
Zeyu Wang,
Zhifeng Li,
Xiu Li,
Wei Liu,
Dan Xu,
Linfeng Zhang,
Qifeng Chen
Abstract:
With the rapid development of AI-generated content (AIGC), video generation has emerged as one of its most dynamic and impactful subfields. In particular, the advancement of video generation foundation models has led to growing demand for controllable video generation methods that can more accurately reflect user intent. Most existing foundation models are designed for text-to-video generation, where text prompts alone are often insufficient to express complex, multi-modal, and fine-grained user requirements. This limitation makes it challenging for users to generate videos with precise control using current models. To address this issue, recent research has explored the integration of additional non-textual conditions, such as camera motion, depth maps, and human pose, to extend pretrained video generation models and enable more controllable video synthesis. These approaches aim to enhance the flexibility and practical applicability of AIGC-driven video generation systems. In this survey, we provide a systematic review of controllable video generation, covering both theoretical foundations and recent advances in the field. We begin by introducing the key concepts and commonly used open-source video generation models. We then focus on control mechanisms in video diffusion models, analyzing how different types of conditions can be incorporated into the denoising process to guide generation. Finally, we categorize existing methods based on the types of control signals they leverage, including single-condition generation, multi-condition generation, and universal controllable generation. For a complete list of the literature on controllable video generation reviewed, please visit our curated repository at https://github.com/mayuelala/Awesome-Controllable-Video-Generation.
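As a concrete illustration of the control mechanisms the survey analyzes, here is a toy sketch of one common conditioning pattern: channel-wise concatenation of a control signal (e.g., a depth map) into a denoising step. Shapes, the noise schedule, and the epsilon model are all illustrative assumptions:

```python
# Minimal numpy sketch of condition injection in one denoising step, in the
# spirit of the surveyed methods; everything here is a toy stand-in.
import numpy as np

def denoise_step(x_t, depth_map, t, eps_model):
    """One DDPM-style update where the control signal is concatenated
    to the noisy latent along the channel axis before noise prediction."""
    inp = np.concatenate([x_t, depth_map], axis=0)   # (C+1, H, W)
    eps = eps_model(inp, t)                          # predicted noise, (C, H, W)
    alpha = 1.0 - 0.001 * t                          # toy schedule
    return (x_t - (1 - alpha) * eps) / np.sqrt(alpha)

# Toy epsilon model: any callable with this signature works here.
eps_model = lambda inp, t: 0.1 * inp[:-1]            # ignores the control channel
x = denoise_step(np.random.randn(4, 8, 8), np.random.rand(1, 8, 8), t=10, eps_model=eps_model)
print(x.shape)  # (4, 8, 8)
```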
Submitted 22 July, 2025;
originally announced July 2025.
-
Step-Audio 2 Technical Report
Authors:
Boyong Wu,
Chao Yan,
Chen Hu,
Cheng Yi,
Chengli Feng,
Fei Tian,
Feiyu Shen,
Gang Yu,
Haoyang Zhang,
Jingbei Li,
Mingrui Chen,
Peng Liu,
Wang You,
Xiangyu Tony Zhang,
Xingyuan Li,
Xuerui Yang,
Yayue Deng,
Yechang Huang,
Yuxin Li,
Yuxin Zhang,
Zhao You,
Brian Li,
Changyi Wan,
Hanpeng Hu,
Jiangjie Zhen
, et al. (84 additional authors not shown)
Abstract:
This paper presents Step-Audio 2, an end-to-end multi-modal large language model designed for industry-strength audio understanding and speech conversation. By integrating a latent audio encoder and reasoning-centric reinforcement learning (RL), Step-Audio 2 achieves promising performance in automatic speech recognition (ASR) and audio understanding. To facilitate genuine end-to-end speech conversation, Step-Audio 2 incorporates the generation of discrete audio tokens into language modeling, significantly enhancing its responsiveness to paralinguistic information such as speaking styles and emotions. To effectively leverage the rich textual and acoustic knowledge in real-world data, Step-Audio 2 integrates retrieval-augmented generation (RAG) and is able to call external tools such as web search to mitigate hallucination and audio search to switch timbres. Trained on millions of hours of speech and audio data, Step-Audio 2 delivers intelligence and expressiveness across diverse conversational scenarios. Evaluation results demonstrate that Step-Audio 2 achieves state-of-the-art performance on various audio understanding and conversational benchmarks compared to other open-source and commercial solutions. Please visit https://github.com/stepfun-ai/Step-Audio2 for more information.
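One way to picture "incorporating discrete audio tokens into language modeling" is a single interleaved token stream decoded autoregressively. The vocabulary layout, offset, and chunking below are assumptions for illustration, not Step-Audio 2's actual tokenizer:

```python
# Hedged sketch of interleaving discrete audio tokens with text tokens in a
# single language-model stream; the special-token layout is assumed.
TEXT_VOCAB = 50_000
AUDIO_VOCAB = 1_024
AUDIO_OFFSET = TEXT_VOCAB  # audio codes live after the text ids

def interleave(text_ids, audio_codes, chunk=4):
    """Alternate a text span with a chunk of audio codes so one decoder
    can emit both modalities autoregressively."""
    stream, a = [], 0
    for i, tok in enumerate(text_ids):
        stream.append(tok)
        if (i + 1) % chunk == 0 and a < len(audio_codes):
            stream.extend(AUDIO_OFFSET + c for c in audio_codes[a:a + chunk])
            a += chunk
    stream.extend(AUDIO_OFFSET + c for c in audio_codes[a:])
    return stream

print(interleave([1, 2, 3, 4, 5], [7, 8, 9]))
```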
Submitted 22 July, 2025;
originally announced July 2025.
-
Robust Noisy Pseudo-label Learning for Semi-supervised Medical Image Segmentation Using Diffusion Model
Authors:
Lin Xi,
Yingliang Ma,
Cheng Wang,
Sandra Howell,
Aldo Rinaldi,
Kawal S. Rhode
Abstract:
Obtaining pixel-level annotations in the medical domain is both expensive and time-consuming, often requiring close collaboration between clinical experts and developers. Semi-supervised medical image segmentation aims to leverage limited annotated data alongside abundant unlabeled data to achieve accurate segmentation. However, existing semi-supervised methods often struggle to structure semantic distributions in the latent space due to noise introduced by pseudo-labels. In this paper, we propose a novel diffusion-based framework for semi-supervised medical image segmentation. Our method introduces a constraint into the latent structure of semantic labels during the denoising diffusion process by enforcing prototype-based contrastive consistency. Rather than explicitly delineating semantic boundaries, the model leverages class prototypes (centralized semantic representations in the latent space) as anchors. This strategy improves the robustness of dense predictions, particularly in the presence of noisy pseudo-labels. We also introduce a new publicly available benchmark: Multi-Object Segmentation in X-ray Angiography Videos (MOSXAV), which provides detailed, manually annotated segmentation ground truth for multiple anatomical structures in X-ray angiography videos. Extensive experiments on the EndoScapes2023 and MOSXAV datasets demonstrate that our method outperforms state-of-the-art medical image segmentation approaches under the semi-supervised learning setting. This work presents a robust and data-efficient diffusion model that offers enhanced flexibility and strong potential for a wide range of clinical applications.
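The prototype-anchored consistency idea can be sketched as a contrastive loss against class-mean anchors. The following is our illustrative reading of such a loss, written from the abstract alone and not the authors' implementation:

```python
# A minimal sketch of prototype-based contrastive consistency: pixel
# embeddings are pulled toward their class prototype and pushed from others.
import numpy as np

def prototype_contrastive_loss(emb, labels, num_classes, tau=0.1):
    """emb: (N, D) pixel embeddings; labels: (N,) pseudo-labels."""
    protos = np.stack([emb[labels == c].mean(axis=0) for c in range(num_classes)])
    # cosine similarity of every embedding to every prototype
    emb_n = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    pro_n = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = emb_n @ pro_n.T / tau                      # (N, C)
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(emb)), labels].mean()

emb = np.random.randn(100, 16)
labels = np.random.randint(0, 4, size=100)
print(prototype_contrastive_loss(emb, labels, num_classes=4))
```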
Submitted 22 July, 2025;
originally announced July 2025.
-
PromptAL: Sample-Aware Dynamic Soft Prompts for Few-Shot Active Learning
Authors:
Hui Xiang,
Jinqiao Shi,
Ting Zhang,
Xiaojie Zhao,
Yong Liu,
Yong Ma
Abstract:
Active learning (AL) aims to optimize model training and reduce annotation costs by selecting the most informative samples for labeling. Typically, AL methods rely on the empirical distribution of labeled data to define the decision boundary and perform uncertainty or diversity estimation, subsequently identifying potential high-quality samples. In few-shot scenarios, the empirical distribution often diverges significantly from the target distribution, causing the decision boundary to shift away from its optimal position. However, existing methods overlook the role of unlabeled samples in enhancing the empirical distribution to better align with the target distribution, resulting in a suboptimal decision boundary and the selection of samples that inadequately represent the target distribution. To address this, we propose a hybrid AL framework, termed PromptAL (Sample-Aware Dynamic Soft Prompts for Few-Shot Active Learning). This framework accounts for the contribution of each unlabeled data point in aligning the current empirical distribution with the target distribution, thereby optimizing the decision boundary. Specifically, PromptAL first leverages unlabeled data to construct sample-aware dynamic soft prompts that adjust the model's predictive distribution and decision boundary. Subsequently, based on the adjusted decision boundary, it integrates uncertainty estimation with both global and local diversity to select high-quality samples that more accurately represent the target distribution. Experimental results on six in-domain and three out-of-domain datasets show that PromptAL achieves superior performance over nine baselines. Our codebase is openly accessible.
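The selection step, uncertainty under the prompt-adjusted distribution combined with diversity, can be sketched as a simple scoring rule. The weighting scheme and the global-diversity proxy below are our assumptions, not PromptAL's exact criterion:

```python
# Hedged sketch of uncertainty-plus-diversity sample selection.
import numpy as np

def select_samples(probs, feats, k, alpha=0.5):
    """probs: (N, C) predictions under the dynamic soft prompt;
    feats: (N, D) sample features; returns indices of the k picks."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)      # uncertainty
    centroid = feats.mean(axis=0)
    diversity = np.linalg.norm(feats - centroid, axis=1)         # global spread
    score = alpha * entropy + (1 - alpha) * diversity
    return np.argsort(-score)[:k]

probs = np.random.dirichlet(np.ones(5), size=200)
feats = np.random.randn(200, 32)
print(select_samples(probs, feats, k=8))
```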
Submitted 22 July, 2025;
originally announced July 2025.
-
Learning Patient-Specific Spatial Biomarker Dynamics via Operator Learning for Alzheimer's Disease Progression
Authors:
Jindong Wang,
Yutong Mao,
Xiao Liu,
Wenrui Hao
Abstract:
Alzheimer's disease (AD) is a complex, multifactorial neurodegenerative disorder with substantial heterogeneity in progression and treatment response. Despite recent therapeutic advances, predictive models capable of accurately forecasting individualized disease trajectories remain limited. Here, we present a machine learning-based operator learning framework for personalized modeling of AD progression, integrating longitudinal multimodal imaging, biomarker, and clinical data. Unlike conventional models with prespecified dynamics, our approach directly learns patient-specific disease operators governing the spatiotemporal evolution of amyloid, tau, and neurodegeneration biomarkers. Using Laplacian eigenfunction bases, we construct geometry-aware neural operators capable of capturing complex brain dynamics. Embedded within a digital twin paradigm, the framework enables individualized predictions, simulation of therapeutic interventions, and in silico clinical trials. Applied to AD clinical data, our method achieves high prediction accuracy exceeding 90% across multiple biomarkers, substantially outperforming existing approaches. This work offers a scalable, interpretable platform for precision modeling and personalized therapeutic optimization in neurodegenerative diseases.
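The Laplacian-eigenfunction construction can be illustrated with a toy graph standing in for brain geometry: encode a biomarker field in the leading eigenbasis, step the spectral coefficients with a learned map, and decode back to the mesh. Everything below is illustrative, not the authors' implementation:

```python
# Sketch of a geometry-aware spectral encoding with toy latent dynamics.
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.maximum(A, A.T)                     # symmetric adjacency (toy "mesh")
L = np.diag(A.sum(1)) - A                  # graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
basis = eigvecs[:, :8]                     # first 8 eigenfunctions

field = rng.random(50)                     # biomarker value per node
coeffs = basis.T @ field                   # spectral encoding

W = rng.standard_normal((8, 8)) * 0.1      # stand-in for a learned operator
coeffs_next = coeffs + np.tanh(W @ coeffs) # one step of latent dynamics
field_next = basis @ coeffs_next           # decode back to the mesh
print(field_next.shape)                    # (50,)
```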
Submitted 21 July, 2025;
originally announced July 2025.
-
Interaction as Intelligence: Deep Research With Human-AI Partnership
Authors:
Lyumanshan Ye,
Xiaojie Cai,
Xinkai Wang,
Junfei Wang,
Xiangkun Hu,
Jiadi Su,
Yang Nan,
Sihan Wang,
Bohan Zhang,
Xiaoze Fan,
Jinbin Luo,
Yuxiang Zheng,
Tianze Xu,
Dayuan Fu,
Yunze Wu,
Pengrui Lu,
Zengzhi Wang,
Yiwei Qin,
Zhen Huang,
Yan Ma,
Zhulin Hu,
Haoyang Zou,
Tiantian Mi,
Yixin Ye,
Ethan Chern
, et al. (1 additional authors not shown)
Abstract:
This paper introduces "Interaction as Intelligence" research series, presenting a reconceptualization of human-AI relationships in deep research tasks. Traditional approaches treat interaction merely as an interface for accessing AI capabilities-a conduit between human intent and machine output. We propose that interaction itself constitutes a fundamental dimension of intelligence. As AI systems e…
▽ More
This paper introduces "Interaction as Intelligence" research series, presenting a reconceptualization of human-AI relationships in deep research tasks. Traditional approaches treat interaction merely as an interface for accessing AI capabilities-a conduit between human intent and machine output. We propose that interaction itself constitutes a fundamental dimension of intelligence. As AI systems engage in extended thinking processes for research tasks, meaningful interaction transitions from an optional enhancement to an essential component of effective intelligence. Current deep research systems adopt an "input-wait-output" paradigm where users initiate queries and receive results after black-box processing. This approach leads to error cascade effects, inflexible research boundaries that prevent question refinement during investigation, and missed opportunities for expertise integration. To address these limitations, we introduce Deep Cognition, a system that transforms the human role from giving instructions to cognitive oversight-a mode of engagement where humans guide AI thinking processes through strategic intervention at critical junctures. Deep cognition implements three key innovations: (1)Transparent, controllable, and interruptible interaction that reveals AI reasoning and enables intervention at any point; (2)Fine-grained bidirectional dialogue; and (3)Shared cognitive context where the system observes and adapts to user behaviors without explicit instruction. User evaluation demonstrates that this cognitive oversight paradigm outperforms the strongest baseline across six key metrics: Transparency(+20.0%), Fine-Grained Interaction(+29.2%), Real-Time Intervention(+18.5%), Ease of Collaboration(+27.7%), Results-Worth-Effort(+8.8%), and Interruptibility(+20.7%). Evaluations on challenging research problems show 31.8% to 50.0% points of improvements over deep research systems.
Submitted 21 July, 2025;
originally announced July 2025.
-
Uncertainty-aware Probabilistic 3D Human Motion Forecasting via Invertible Networks
Authors:
Yue Ma,
Kanglei Zhou,
Fuyang Yu,
Frederick W. B. Li,
Xiaohui Liang
Abstract:
3D human motion forecasting aims to enable autonomous applications. Estimating uncertainty for each prediction (i.e., confidence based on probability density or quantile) is essential for safety-critical contexts like human-robot collaboration to minimize risks. However, existing diverse motion forecasting approaches struggle with uncertainty quantification due to implicit probabilistic representations hindering uncertainty modeling. We propose ProbHMI, which introduces invertible networks to parameterize poses in a disentangled latent space, enabling probabilistic dynamics modeling. A forecasting module then explicitly predicts future latent distributions, allowing effective uncertainty quantification. Evaluated on benchmarks, ProbHMI achieves strong performance for both deterministic and diverse prediction while validating uncertainty calibration, critical for risk-aware decision making.
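The recipe of an invertible latent parameterization plus explicitly predicted latent distributions admits a compact sketch. The orthogonal "network", the forecaster, and the entropy readout below are toy assumptions standing in for ProbHMI's components:

```python
# Illustrative sketch: invertible pose encoding, Gaussian latent forecast,
# and an entropy-based uncertainty estimate.
import numpy as np

rng = np.random.default_rng(1)
W = np.linalg.qr(rng.standard_normal((24, 24)))[0]   # orthogonal => invertible

def encode(pose):  return W @ pose                    # toy invertible network
def decode(z):     return W.T @ z                     # exact inverse

past = rng.standard_normal(24)
z = encode(past)
mu, log_sigma = 0.9 * z, -1.0 * np.ones_like(z)       # stand-in forecaster
sample = mu + np.exp(log_sigma) * rng.standard_normal(24)
future_pose = decode(sample)
# Per-dimension Gaussian entropy quantifies predictive uncertainty.
entropy = 0.5 * np.log(2 * np.pi * np.e) + log_sigma
print(future_pose.shape, entropy.mean())
```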
Submitted 19 July, 2025;
originally announced July 2025.
-
UAV-Enabled Wireless-Powered Underground Communication Networks: A Novel Time Allocation Approach
Authors:
Kaiqiang Lin,
Yijie Mao,
Onel Luis Alcaraz López,
Mohamed-Slim Alouini
Abstract:
Wireless-powered underground communication networks (WPUCNs), which allow underground devices (UDs) to harvest energy from wireless signals for battery-free communication, offer a promising solution for sustainable underground monitoring. However, the severe wireless signal attenuation in challenging underground environments and the costly acquisition of channel state information (CSI) make large-scale WPUCNs economically infeasible in practice. To address this challenge, we introduce flexible unmanned aerial vehicles (UAVs) into WPUCNs, leading to UAV-enabled WPUCN systems. In this system, a UAV is first charged by a terrestrial hybrid access point (HAP), then flies to the monitoring area to wirelessly charge UDs. Afterwards, the UAV collects data from the UDs and finally returns to the HAP for data offloading. Based on the proposed UAV-enabled WPUCN system, we first propose its energy consumption model and a hybrid wireless energy transfer (WET) approach (i.e., UDs can harvest energy from both the HAP and the UAV) relying on full-CSI and CSI-free multi-antenna beamforming. Then, we formulate and address a time allocation problem to minimize the energy consumption of the UAV, while ensuring that the throughput requirements of all UDs are met and all sensor data is offloaded. Through simulations of a realistic farming scenario, we demonstrate that the proposed hybrid WET approach outperforms other WET approaches, with performance gains influenced by the number of antennas, communication distance, number of UDs, and underground conditions. Additionally, under the optimized time allocation, we find that the proposed hybrid WET approach based on a CSI-free multi-antenna scheme achieves the lowest UAV energy consumption among all WET mechanisms, thereby enabling sustainable underground monitoring in WPUCNs.
Submitted 19 July, 2025;
originally announced July 2025.
-
Routine: A Structural Planning Framework for LLM Agent System in Enterprise
Authors:
Guancheng Zeng,
Xueyi Chen,
Jiawang Hu,
Shaohua Qi,
Yaxuan Mao,
Zhantao Wang,
Yifan Nie,
Shuang Li,
Qiuyang Feng,
Pengxu Qiu,
Yujia Wang,
Wenqiang Han,
Linyan Huang,
Gang Li,
Jingjing Mo,
Haowen Hu
Abstract:
The deployment of agent systems in an enterprise environment is often hindered by several challenges: common models lack domain-specific process knowledge, leading to disorganized plans, missing key tools, and poor execution stability. To address these challenges, this paper introduces Routine, a multi-step agent planning framework designed with a clear structure, explicit instructions, and seamless parameter passing to guide the agent's execution module in performing multi-step tool-calling tasks with high stability. In evaluations conducted within a real-world enterprise scenario, Routine significantly increases the accuracy of model tool calls, raising the performance of GPT-4o from 41.1% to 96.3% and Qwen3-14B from 32.6% to 83.3%. We further constructed a Routine-following training dataset and fine-tuned Qwen3-14B, resulting in an accuracy increase to 88.2% on scenario-specific evaluations, indicating improved adherence to execution plans. In addition, we employed Routine-based distillation to create a scenario-specific, multi-step tool-calling dataset. Fine-tuning on this distilled dataset raised the model's accuracy to 95.5%, approaching GPT-4o's performance. These results highlight Routine's effectiveness in distilling domain-specific tool-usage patterns and enhancing model adaptability to new scenarios. Our experimental results demonstrate that Routine provides a practical and accessible approach to building stable agent workflows, accelerating the deployment and adoption of agent systems in enterprise environments, and advancing the technical vision of AI for Process.
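A Routine-style plan can be pictured as an explicit step list with named tools and parameter passing between steps. The schema and resolver below are hypothetical, inferred only from the abstract's description of clear structure and seamless parameter passing:

```python
# A hypothetical Routine-style plan with explicit steps and parameter passing.
routine = [
    {"step": 1, "tool": "lookup_order",  "args": {"order_id": "{user.order_id}"},
     "save_as": "order"},
    {"step": 2, "tool": "check_refund_policy", "args": {"sku": "{order.sku}"},
     "save_as": "policy"},
    {"step": 3, "tool": "create_refund", "args": {"order_id": "{order.id}",
     "amount": "{policy.max_refund}"}},
]

def run(routine, tools, ctx):
    """Execute steps in order, resolving '{a.b}' references from context."""
    import re
    def resolve(v):
        m = re.fullmatch(r"\{(\w+)\.(\w+)\}", v)
        return ctx[m.group(1)][m.group(2)] if m else v
    for step in routine:
        args = {k: resolve(v) for k, v in step["args"].items()}
        out = tools[step["tool"]](**args)
        if "save_as" in step:
            ctx[step["save_as"]] = out

tools = {"lookup_order": lambda order_id: {"id": order_id, "sku": "A1"},
         "check_refund_policy": lambda sku: {"max_refund": 42},
         "create_refund": lambda order_id, amount: print("refund", order_id, amount)}
run(routine, tools, {"user": {"order_id": "o-7"}})
```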
Submitted 22 July, 2025; v1 submitted 18 July, 2025;
originally announced July 2025.
-
Quantum-Safe Identity Verification using Relativistic Zero-Knowledge Proof Systems
Authors:
Yao Ma,
Wen Yu Kon,
Jefferson Chu,
Kevin Han Yong Loh,
Kaushik Chakraborty,
Charles Lim
Abstract:
Identity verification is the process of confirming an individual's claimed identity, which is essential in sectors like finance, healthcare, and online services to ensure security and prevent fraud. However, current password/PIN-based identity solutions are susceptible to phishing or skimming attacks, where malicious intermediaries attempt to steal credentials using fake identification portals. Alikhani et al. [Nature, 2021] began exploring identity verification through graph coloring-based relativistic zero-knowledge proofs (RZKPs), a key cryptographic primitive that enables a prover to demonstrate knowledge of secret credentials to a verifier without disclosing any information about the secret. Our work advances this field and addresses unresolved issues: From an engineering perspective, we further relax the relativistic constraints from 60 m to 30 m, and significantly enhance the stability and scalability of the experimental demonstration of the 2-prover graph coloring-based RZKP protocol for near-term use cases. At the same time, for long-term security against entangled malicious provers, we propose a modified protocol with comparable computation and communication costs, and we establish an upper bound on the soundness parameter of this modified protocol. Finally, we extend the two-prover, two-verifier setup to a three-prover configuration, demonstrating the security of such relativistic protocols against entangled malicious provers.
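For intuition, one round of the classic single-prover, commitment-based graph 3-coloring ZKP looks as follows. The paper's relativistic two-prover variant replaces the cryptographic commitments with spacelike separation of the provers, which this textbook toy does not capture:

```python
# Classic one-round graph 3-coloring ZKP (textbook version, for intuition).
import hashlib, secrets

def commit(value: int) -> tuple[str, bytes]:
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + bytes([value])).hexdigest(), nonce

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
coloring = [0, 1, 2, 0]                          # prover's secret witness

# Prover: permute colors, then commit to every vertex's color.
perm = [1, 2, 0]                                 # random permutation of {0,1,2}
colors = [perm[c] for c in coloring]
commitments = [commit(c) for c in colors]

# Verifier: challenge one edge; prover opens only its two endpoints.
u, v = edges[secrets.randbelow(len(edges))]
for w in (u, v):
    digest, nonce = commitments[w]
    assert hashlib.sha256(nonce + bytes([colors[w]])).hexdigest() == digest
assert colors[u] != colors[v]                    # endpoints differ => accept
print("round accepted")
```

Repeating the round many times drives the soundness error down exponentially while revealing nothing beyond the validity of the coloring.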
Submitted 18 July, 2025;
originally announced July 2025.
-
ATRO: A Fast Solver-Free Algorithm for Topology and Routing Optimization of Reconfigurable Datacenter Networks
Authors:
Yingming Mao,
Qiaozhu Zhai,
Zhen Yao,
Xia Zhu,
Ximeng Liu,
Xinchi Han
Abstract:
The growing scale and complexity of reconfigurable data center networks (DCNs) demand more scalable and efficient algorithms for computing logical topologies and routing. Reconfigurable DCNs typically operate in two modes: one-hop configurations that require frequent topology optimization (TO), and multi-hop scenarios that involve joint topology and routing optimization (TRO). In both cases, the combinatorial nature of topology decisions makes it difficult for existing methods to balance solution quality and runtime efficiency. To address this, we introduce Alternating Topology and Routing Optimization (ATRO), a solver-free framework that alternates between TO and routing optimization (RO). This decomposition exploits two key insights: first, each alternating update step monotonically reduces maximum link utilization (MLU), ensuring consistent performance improvement across iterations; second, the TO subproblem, equivalent to one-hop optimization, exhibits a monotonic structure that enables optimal solutions via an efficient Accelerated Binary Search Method (ABSM). To preserve the solver-free design, RO is solved using existing Traffic Engineering accelerators. ATRO attains the global optimum in one-hop scenarios and significantly outperforms baselines in multi-hop settings in terms of both runtime and solution quality. Evaluations confirm its scalability and robustness across diverse DCNs.
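The ABSM component exploits monotonicity: if a target maximum link utilization (MLU) is achievable, any looser target is too, so the optimum can be found by binary search over a feasibility oracle. A toy sketch with a stand-in oracle (the paper's actual one-hop topology check is far more involved):

```python
# Hedged sketch of the Accelerated Binary Search Method (ABSM) idea.
def feasible(mlu_target: float) -> bool:
    # Toy oracle: pretend any target >= 0.62 admits a feasible topology.
    return mlu_target >= 0.62

def absm(lo=0.0, hi=1.0, eps=1e-4):
    """Binary search for the smallest feasible MLU."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid          # feasible => try a tighter utilization
        else:
            lo = mid          # infeasible => relax the target
    return hi

print(round(absm(), 3))       # ~0.62
```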
Submitted 18 July, 2025;
originally announced July 2025.
-
Apple Intelligence Foundation Language Models: Tech Report 2025
Authors:
Hanzhi Zhou,
Erik Hornberger,
Pengsheng Guo,
Xiyou Zhou,
Saiwen Wang,
Xin Wang,
Yifei He,
Xuankai Chang,
Rene Rauch,
Louis D'hauwe,
John Peebles,
Alec Doane,
Kohen Chia,
Jenna Thibodeau,
Zi-Yi Dou,
Yuanyang Zhang,
Ruoming Pang,
Reed Li,
Zhifeng Chen,
Jeremy Warner,
Zhaoyang Xu,
Sophy Lee,
David Mizrahi,
Ramsey Tantawi,
Chris Chaney
, et al. (370 additional authors not shown)
Abstract:
We introduce two multilingual, multimodal foundation language models that power Apple Intelligence features across Apple devices and services: (i) a 3B-parameter on-device model optimized for Apple silicon through architectural innovations such as KV-cache sharing and 2-bit quantization-aware training; and (ii) a scalable server model built on a novel Parallel-Track Mixture-of-Experts (PT-MoE) transformer that combines track parallelism, mixture-of-experts sparse computation, and interleaved global-local attention to deliver high quality with competitive cost on Apple's Private Cloud Compute platform. Both models are trained on large-scale multilingual and multimodal datasets sourced via responsible web crawling, licensed corpora, and high-quality synthetic data, then further refined with supervised fine-tuning and reinforcement learning on a new asynchronous platform. The resulting models support several additional languages while understanding images and executing tool calls. In public benchmarks and human evaluations, both the server model and the on-device model match or surpass comparably sized open baselines.
A new Swift-centric Foundation Models framework exposes guided generation, constrained tool calling, and LoRA adapter fine-tuning, allowing developers to integrate these capabilities with a few lines of code. The latest advancements in Apple Intelligence models are grounded in our Responsible AI approach with safeguards like content filtering and locale-specific evaluation, as well as our commitment to protecting our users' privacy with innovations like Private Cloud Compute.
Submitted 17 July, 2025;
originally announced July 2025.
-
HRSeg: High-Resolution Visual Perception and Enhancement for Reasoning Segmentation
Authors:
Weihuang Lin,
Yiwei Ma,
Xiaoshuai Sun,
Shuting He,
Jiayi Ji,
Liujuan Cao,
Rongrong Ji
Abstract:
The reasoning segmentation task involves segmenting objects within an image by interpreting implicit user instructions, which may encompass subtleties such as contextual cues and open-world knowledge. Despite significant advancements made by existing approaches, they remain constrained by low perceptual resolution, as visual encoders are typically pre-trained at lower resolutions. Furthermore, simply interpolating the positional embeddings of visual encoders to enhance perceptual resolution yields only marginal performance improvements while incurring substantial computational costs. To address this, we propose HRSeg, an efficient model with high-resolution fine-grained perception. It features two key innovations: High-Resolution Perception (HRP) and High-Resolution Enhancement (HRE). The HRP module processes high-resolution images through cropping, integrating local and global features for multi-granularity quality. The HRE module enhances mask features by integrating fine-grained information from high-resolution images, refining their alignment with text features for precise segmentation. Extensive ablation studies validate the effectiveness of our modules, while comprehensive experiments on multiple benchmark datasets demonstrate HRSeg's superior performance.
Submitted 17 July, 2025;
originally announced July 2025.
-
PatternSight: A Perceptual Grouping Effectiveness Assessment Approach for Graphical Patterns in Charts
Authors:
Xumeng Wang,
Xiangxuan Zhang,
Zhiqi Gao,
Shuangcheng Jiao,
Yuxin Ma
Abstract:
The boom in visualization generation tools has significantly lowered the threshold for chart authoring. Nevertheless, chart authors with an insufficient understanding of perceptual theories may encounter difficulties in evaluating the effectiveness of chart representations, thereby struggling to identify the appropriate chart design to convey the intended data patterns. To address this issue, we propose a perception simulation model that can assess the perceptual effectiveness of charts by predicting graphical patterns that chart viewers are likely to notice. The perception simulation model integrates perceptual theory into visual feature extraction of chart elements to provide interpretable model outcomes. Results from a human perception study show that our model can simulate the perceptual grouping behaviors of most chart viewers and cover diverse perceptual results. We also embed the model into a prototype interface called PatternSight to help chart authors assess whether a chart design satisfies their pattern representation requirements as expected and to determine feasible improvements to the visual design. According to the results of a user experiment, PatternSight can effectively assist chart authors in optimizing chart design for representing data patterns.
Submitted 16 July, 2025;
originally announced July 2025.
-
ReAL-AD: Towards Human-Like Reasoning in End-to-End Autonomous Driving
Authors:
Yuhang Lu,
Jiadong Tu,
Yuexin Ma,
Xinge Zhu
Abstract:
End-to-end autonomous driving has emerged as a promising approach to unify perception, prediction, and planning within a single framework, reducing information loss and improving adaptability. However, existing methods often rely on fixed and sparse trajectory supervision, limiting their ability to capture the hierarchical reasoning process that human drivers naturally employ. To bridge this gap, we propose ReAL-AD, a Reasoning-Augmented Learning framework that structures decision-making in autonomous driving based on the three-tier human cognitive model: Driving Strategy, Driving Decision, and Driving Operation, where Vision-Language Models (VLMs) are incorporated to enhance situational awareness and structured reasoning across these levels. Specifically, we introduce: (1) the Strategic Reasoning Injector, which formulates high-level driving strategies by interpreting complex traffic contexts from VLM-generated insights; (2) the Tactical Reasoning Integrator, which refines strategic intent into interpretable tactical choices such as lane changes, overtaking, and speed adjustments; and (3) the Hierarchical Trajectory Decoder, which progressively translates tactical decisions into precise control actions for smooth and human-like trajectory execution. Extensive evaluations show that integrating our framework improves planning accuracy and safety by over 30%, making end-to-end autonomous driving more interpretable and aligned with human-like hierarchical reasoning. The project page can be found at: https://4dvlab.github.io/project_page/realad
Submitted 15 July, 2025;
originally announced July 2025.
-
Leveraging Bi-Directional Channel Reciprocity for Robust Ultra-Low-Rate Implicit CSI Feedback with Deep Learning
Authors:
Zhenyu Liu,
Yi Ma,
Rahim Tafazolli,
Zhi Ding
Abstract:
Deep learning-based implicit channel state information (CSI) feedback has been introduced to enhance spectral efficiency in massive MIMO systems. Existing methods often show performance degradation in ultra-low-rate scenarios and inadaptability across diverse environments. In this paper, we propose Dual-ImRUNet, an efficient uplink-assisted deep implicit CSI feedback framework incorporating two novel plug-in preprocessing modules to achieve ultra-low feedback rates while maintaining high environmental robustness. First, a novel bi-directional correlation enhancement module is proposed to strengthen the correlation between uplink and downlink CSI eigenvector matrices. This module projects highly correlated uplink and downlink channel matrices into their respective eigenspaces, effectively reducing redundancy for ultra-low-rate feedback. Second, an innovative input format alignment module is designed to maintain consistent data distributions at both encoder and decoder sides without extra transmission overhead, thereby enhancing robustness against environmental variations. Finally, we develop an efficient transformer-based implicit CSI feedback network to exploit angular-delay domain sparsity and bi-directional correlation for ultra-low-rate CSI compression. Simulation results demonstrate successful reduction of the feedback overhead by 85% compared with the state-of-the-art method and robustness against unseen environments.
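The bi-directional correlation-enhancement step can be illustrated as projecting downlink CSI into an eigenbasis derived from the correlated uplink channel, concentrating energy into a few coefficients. The dimensions and the correlation model below are toy assumptions:

```python
# Illustrative sketch (not the authors' code) of eigenspace projection for
# redundancy reduction before ultra-low-rate feedback.
import numpy as np

rng = np.random.default_rng(0)
H_ul = rng.standard_normal((32, 8)) + 1j * rng.standard_normal((32, 8))
H_dl = 0.9 * H_ul + 0.1 * (rng.standard_normal((32, 8)) + 1j * rng.standard_normal((32, 8)))

# Eigenbasis of the uplink spatial covariance.
_, U = np.linalg.eigh(H_ul @ H_ul.conj().T)
coeffs = U.conj().T @ H_dl                 # downlink CSI in the uplink eigenspace

# Energy concentrates in a few coefficients => keep only the top-k rows.
energy = np.abs(coeffs).sum(axis=1)
keep = np.argsort(-energy)[:8]
compressed = coeffs[keep]
print(compressed.shape)                     # (8, 8) instead of (32, 8)
```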
Submitted 16 July, 2025;
originally announced July 2025.
-
MOSPA: Human Motion Generation Driven by Spatial Audio
Authors:
Shuyang Xu,
Zhiyang Dou,
Mingyi Shi,
Liang Pan,
Leo Ho,
Jingbo Wang,
Yuan Liu,
Cheng Lin,
Yuexin Ma,
Wenping Wang,
Taku Komura
Abstract:
Enabling virtual humans to dynamically and realistically respond to diverse auditory stimuli remains a key challenge in character animation, demanding the integration of perceptual modeling and motion synthesis. Despite its significance, this task remains largely unexplored. Most previous works have primarily focused on mapping modalities like speech, audio, and music to generate human motion. However, these models typically overlook the impact of spatial features encoded in spatial audio signals on human motion. To bridge this gap and enable high-quality modeling of human movements in response to spatial audio, we introduce the first comprehensive Spatial Audio-Driven Human Motion (SAM) dataset, which contains diverse and high-quality spatial audio and motion data. For benchmarking, we develop a simple yet effective diffusion-based generative framework for human MOtion generation driven by SPatial Audio, termed MOSPA, which faithfully captures the relationship between body motion and spatial audio through an effective fusion mechanism. Once trained, MOSPA can generate diverse, realistic human motions conditioned on varying spatial audio inputs. We perform a thorough investigation of the proposed dataset and conduct extensive experiments for benchmarking, where our method achieves state-of-the-art performance on this task. Our model and dataset will be open-sourced upon acceptance. Please refer to our supplementary video for more details.
Submitted 16 July, 2025;
originally announced July 2025.
-
ARMOR: Aligning Secure and Safe Large Language Models via Meticulous Reasoning
Authors:
Zhengyue Zhao,
Yingzi Ma,
Somesh Jha,
Marco Pavone,
Chaowei Xiao
Abstract:
Large Language Models (LLMs) have demonstrated remarkable generative capabilities. However, their susceptibility to misuse has raised significant safety concerns. While post-training safety alignment methods have been widely adopted, LLMs remain vulnerable to malicious instructions that can bypass safety constraints. Recent efforts have introduced inference-time safety reasoning (system-2 alignment), where LLMs conduct a reasoning process to perform safety verification before the final response. We show, however, that these checks are driven by ad-hoc reasoning that diverges from the structured human process, in which one first discerns a user's true intent and then evaluates the associated risk based on that intent. Consequently, these defenses remain vulnerable to sophisticated jailbreak prompts that cloak harmful goals in seemingly benign language. To build secure and safe LLMs, we propose a reasoning-based safety alignment framework, ARMOR, that replaces the ad-hoc chain-of-thought reasoning process with a human-aligned, structured one. At inference, ARMOR (1) detects likely jailbreak strategies, (2) extracts the user's core intent while discarding deceptive instructions, and (3) applies a policy-grounded safety analysis to the purified request. ARMOR is evaluated on adaptive jailbreak attacks and multiple safety benchmarks, and test-time scaling is conducted to further improve its performance. Results demonstrate that ARMOR significantly enhances robustness against state-of-the-art adaptive jailbreak attacks and outperforms recent reasoning-based aligned models across various safety benchmarks.
Submitted 14 July, 2025;
originally announced July 2025.
-
CogDDN: A Cognitive Demand-Driven Navigation with Decision Optimization and Dual-Process Thinking
Authors:
Yuehao Huang,
Liang Liu,
Shuangming Lei,
Yukai Ma,
Hao Su,
Jianbiao Mei,
Pengxiang Zhao,
Yaqing Gu,
Yong Liu,
Jiajun Lv
Abstract:
Mobile robots are increasingly required to navigate and interact within unknown and unstructured environments to meet human demands. Demand-driven navigation (DDN) enables robots to identify and locate objects based on implicit human intent, even when object locations are unknown. However, traditional data-driven DDN methods rely on pre-collected data for model training and decision-making, limiting their generalization capability in unseen scenarios. In this paper, we propose CogDDN, a VLM-based framework that emulates the human cognitive and learning mechanisms by integrating fast and slow thinking systems and selectively identifying key objects essential to fulfilling user demands. CogDDN identifies appropriate target objects by semantically aligning detected objects with the given instructions. Furthermore, it incorporates a dual-process decision-making module, comprising a Heuristic Process for rapid, efficient decisions and an Analytic Process that analyzes past errors, accumulates them in a knowledge base, and continuously improves performance. Chain of Thought (CoT) reasoning strengthens the decision-making process. Extensive closed-loop evaluations on the AI2Thor simulator with the ProcThor dataset show that CogDDN outperforms single-view camera-only methods by 15%, demonstrating significant improvements in navigation accuracy and adaptability. The project page is available at https://yuehaohuang.github.io/CogDDN/.
Submitted 15 July, 2025;
originally announced July 2025.
-
Cameras as Relative Positional Encoding
Authors:
Ruilong Li,
Brent Yi,
Junchen Liu,
Hang Gao,
Yi Ma,
Angjoo Kanazawa
Abstract:
Transformers are increasingly prevalent for multi-view computer vision tasks, where geometric relationships between viewpoints are critical for 3D perception. To leverage these relationships, multi-view transformers must use camera geometry to ground visual tokens in 3D space. In this work, we compare techniques for conditioning transformers on cameras: token-level raymap encodings, attention-level relative pose encodings, and a new relative encoding we propose -- Projective Positional Encoding (PRoPE) -- that captures complete camera frustums, both intrinsics and extrinsics, as a relative positional encoding. Our experiments begin by showing how relative camera conditioning improves performance in feedforward novel view synthesis, with further gains from PRoPE. This holds across settings: scenes with both shared and varying intrinsics, when combining token- and attention-level conditioning, and for generalization to inputs with out-of-distribution sequence lengths and camera intrinsics. We then verify that these benefits persist for different tasks, stereo depth estimation and discriminative spatial cognition, as well as larger model sizes.
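The heart of PRoPE, as the abstract describes it, is a relative transform between full camera frustums (intrinsics and extrinsics) rather than an absolute per-token raymap. A minimal sketch of that relative quantity, with the attention-conditioning machinery omitted and the matrix layout assumed for illustration:

```python
# Hedged sketch of a relative camera-frustum transform between two views.
import numpy as np

def projection(K, R, t):
    """4x4 world-to-camera-projective transform from intrinsics K and pose [R|t]."""
    P = np.eye(4)
    P[:3, :3] = K @ R
    P[:3, 3] = K @ t
    return P

K = np.array([[500., 0, 128], [0, 500., 128], [0, 0, 1]])
R_i, t_i = np.eye(3), np.zeros(3)
R_j = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])     # 90-degree yaw
t_j = np.array([1., 0, 0])

P_i, P_j = projection(K, R_i, t_i), projection(K, R_j, t_j)
rel = P_j @ np.linalg.inv(P_i)    # camera i's frustum expressed relative to camera j
print(rel.round(2))               # a relative, not absolute, encoding
```

Because `rel` depends only on the pair of cameras, tokens conditioned this way stay invariant to a global change of world frame, which is what makes a relative encoding attractive.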
Submitted 14 July, 2025;
originally announced July 2025.
-
InstaScene: Towards Complete 3D Instance Decomposition and Reconstruction from Cluttered Scenes
Authors:
Zesong Yang,
Bangbang Yang,
Wenqi Dong,
Chenxuan Cao,
Liyuan Cui,
Yuewen Ma,
Zhaopeng Cui,
Hujun Bao
Abstract:
Humans can naturally identify and mentally complete occluded objects in cluttered environments. However, imparting similar cognitive ability to robotics remains challenging even with advanced reconstruction techniques, which model scenes as undifferentiated wholes and fail to recognize complete objects from partial observations. In this paper, we propose InstaScene, a new paradigm towards holistic 3D perception of complex scenes with a primary goal: decomposing arbitrary instances while ensuring complete reconstruction. To achieve precise decomposition, we develop a novel spatial contrastive learning approach that traces the rasterization of each instance across views, significantly enhancing semantic supervision in cluttered scenes. To overcome incompleteness arising from limited observations, we introduce in-situ generation, which harnesses valuable observations and geometric cues, effectively guiding 3D generative models to reconstruct complete instances that seamlessly align with the real world. Experiments on scene decomposition and object completion across complex real-world and synthetic scenes demonstrate that our method achieves superior decomposition accuracy while producing geometrically faithful and visually intact objects.
Submitted 21 July, 2025; v1 submitted 11 July, 2025;
originally announced July 2025.
-
M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning
Authors:
Inclusion AI,
Fudong Wang,
Jiajia Liu,
Jingdong Chen,
Jun Zhou,
Kaixiang Ji,
Lixiang Ru,
Qingpei Guo,
Ruobing Zheng,
Tianqi Li,
Yi Yuan,
Yifan Mao,
Yuting Xiao,
Ziping Ma
Abstract:
Recent advancements in Multimodal Large Language Models (MLLMs), particularly through Reinforcement Learning with Verifiable Rewards (RLVR), have significantly enhanced their reasoning abilities. However, a critical gap persists: these models struggle with dynamic spatial interactions, a capability essential for real-world applications. To bridge this gap, we introduce M2-Reasoning-7B, a model designed to excel in both general and spatial reasoning. Our approach integrates two key innovations: (1) a novel data pipeline that generates 294.2K high-quality data samples (168K for cold-start fine-tuning and 126.2K for RLVR), which feature logically coherent reasoning trajectories and have undergone comprehensive assessment; and (2) a dynamic multi-task training strategy with step-wise optimization to mitigate conflicts between data, and task-specific rewards for delivering tailored incentive signals. This combination of curated data and advanced training allows M2-Reasoning-7B to set a new state-of-the-art (SOTA) across 8 benchmarks, showcasing superior performance in both general and spatial reasoning domains.
Submitted 11 July, 2025;
originally announced July 2025.
-
Invariant-based Robust Weights Watermark for Large Language Models
Authors:
Qingxiao Guo,
Xinjie Zhu,
Yilong Ma,
Hui Jin,
Yunhao Wang,
Weifeng Zhang,
Xiaobing Guo
Abstract:
Watermarking technology has gained significant attention due to the increasing importance of intellectual property (IP) rights, particularly with the growing deployment of large language models (LLMs) on billions of resource-constrained edge devices. To counter the potential threats of IP theft by malicious users, this paper introduces a robust watermarking scheme, requiring neither retraining nor fine-tuning, for transformer models. The scheme generates a unique key for each user and derives a stable watermark value by solving linear constraints constructed from model invariants. Moreover, it utilizes a noise mechanism to hide watermark locations in multi-user scenarios against collusion attacks. This paper evaluates the approach on three popular models (Llama3, Phi3, Gemma), and the experimental results confirm strong robustness across a range of attack methods (fine-tuning, pruning, quantization, permutation, scaling, reversible-matrix, and collusion attacks).
Submitted 10 July, 2025;
originally announced July 2025.
-
Optimal Auction Design in the Joint Advertising
Authors:
Yang Li,
Yuchao Ma,
Qi Qi
Abstract:
Online advertising is a vital revenue source for major internet platforms. Recently, joint advertising, which assigns a bundle of two advertisers to an ad slot instead of allocating a single advertiser, has emerged as an effective method for enhancing allocation efficiency and revenue. However, existing mechanisms for joint advertising fail to realize optimality, as they tend to focus on individual advertisers and overlook bundle structures. This paper identifies an optimal mechanism for joint advertising in a single-slot setting. For multi-slot joint advertising, we propose BundleNet, a novel bundle-based neural network approach specifically designed for joint advertising. Our extensive experiments demonstrate that the mechanisms generated by BundleNet approximate the theoretical analysis results in the single-slot setting and achieve state-of-the-art performance in the multi-slot setting. This significantly increases platform revenue while ensuring approximate dominant strategy incentive compatibility and individual rationality.
Submitted 10 July, 2025;
originally announced July 2025.
-
CCQ: Convolutional Code for Extreme Low-bit Quantization in LLMs
Authors:
Zhaojing Zhou,
Xunchao Li,
Minghao Li,
Handi Zhang,
Haoshuang Wang,
Wenbin Chang,
Yiqun Liu,
Qingqing Dang,
Dianhai Yu,
Yanjun Ma,
Haifeng Wang
Abstract:
The rapid scaling of Large Language Models (LLMs) elevates inference costs and compounds substantial deployment barriers. While quantization to 8 or 4 bits mitigates this, sub-3-bit methods face severe accuracy, scalability, and efficiency degradation. We propose Convolutional Code Quantization (CCQ), an inference-optimized quantization approach compressing LLMs to 2.0-2.75 bits with minimal accuracy loss. Departing from error-prone scalar quantization or slow vector quantization, CCQ integrates a hardware-aware bit-shift encoding and decoding solution with Convolutional Code, Hybrid Encoding, and Code Cluster, jointly overcoming accuracy-speed bottlenecks. We construct a lookup-free encoding space, enabling a linear mapping between the codebook and weight vectors, thereby optimizing inference performance. Meanwhile, by drawing on the concept of data mapping from vector quantization, we minimize the performance degradation of the model under extremely low-bit conditions. Experiments demonstrate that CCQ achieves outstanding performance on LLMs across various benchmarks. We compress DeepSeek-V3 (671B total parameters) to 184GB and ERNIE-4.5-300B-A47B to 89GB, enabling single-GPU deployment of ERNIE 4.5 and eliminating inter-card communication. The 2-bit ERNIE-4.5-300B-A47B model and inference engine have been open-sourced.
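A heavily simplified picture of "lookup-free" low-bit decoding: pack 2-bit codes into bytes and reconstruct weights by bit-shifting plus a linear map, avoiding a codebook table. CCQ's convolutional-code, hybrid-encoding, and code-cluster machinery is far richer than this toy:

```python
# Toy illustration of bit-shift packing and linear (lookup-free) dequantization.
import numpy as np

def pack_2bit(codes):                       # codes in {0, 1, 2, 3}
    out = np.zeros(len(codes) // 4, dtype=np.uint8)
    for i in range(4):
        out |= (codes[i::4].astype(np.uint8) << (2 * i))
    return out

def unpack_dequant(packed, scale, offset):
    codes = np.stack([(packed >> (2 * i)) & 0b11 for i in range(4)], axis=1)
    return codes.reshape(-1).astype(np.float32) * scale + offset  # linear map

w = np.array([0, 3, 1, 2, 2, 2, 0, 1])
packed = pack_2bit(w)
print(unpack_dequant(packed, scale=0.05, offset=-0.075))
```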
Submitted 9 July, 2025;
originally announced July 2025.
-
Squeeze the Soaked Sponge: Efficient Off-policy Reinforcement Finetuning for Large Language Model
Authors:
Jing Liang,
Hongyao Tang,
Yi Ma,
Jinyi Liu,
Yan Zheng,
Shuyue Hu,
Lei Bai,
Jianye Hao
Abstract:
Reinforcement Learning (RL) has demonstrated its potential to improve the reasoning ability of Large Language Models (LLMs). One major limitation of most existing Reinforcement Finetuning (RFT) methods is that they are on-policy RL in nature, i.e., data generated during the past learning process is not fully utilized. This inevitably comes at a significant cost of compute and time, posing a stringent bottleneck on continued economic and efficient scaling. To this end, we launch the renaissance of off-policy RL and propose Reincarnating Mix-policy Proximal Policy Gradient (ReMix), a general approach that enables on-policy RFT methods like PPO and GRPO to leverage off-policy data. ReMix consists of three major components: (1) mix-policy proximal policy gradient with an increased Update-To-Data (UTD) ratio for efficient training; (2) a KL-Convex policy constraint to balance the trade-off between stability and flexibility; (3) policy reincarnation to achieve a seamless transition from efficient early-stage learning to steady asymptotic improvement. In our experiments, we train a series of ReMix models upon PPO and GRPO with 1.5B and 7B base models. ReMix shows an average Pass@1 accuracy of 52.10% (for the 1.5B model) with 0.079M response rollouts and 350 training steps, and achieves 63.27%/64.39% (for the 7B model) with 0.007M/0.011M response rollouts and 50/75 training steps, on five math reasoning benchmarks (i.e., AIME'24, AMC'23, Minerva, OlympiadBench, and MATH500). Compared with 15 recent advanced models, ReMix shows SOTA-level performance with an over 30x to 450x reduction in training cost in terms of rollout data volume. In addition, we reveal insightful findings via multifaceted analysis, including the implicit preference for shorter responses due to the Whipping Effect of off-policy discrepancy, the collapse mode of self-reflection behavior under the presence of severe off-policyness, etc.
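A mix-policy proximal update can be sketched as a PPO-style clipped loss evaluated on both fresh rollouts and replayed off-policy samples, with importance ratios taken against each batch's own behavior policy. The mixing weight and estimator details below are assumptions, not ReMix's exact objective:

```python
# Hedged sketch of a mix-policy clipped policy-gradient loss.
import numpy as np

def clipped_pg_loss(logp_new, logp_behavior, adv, eps=0.2):
    ratio = np.exp(logp_new - logp_behavior)
    return -np.minimum(ratio * adv,
                       np.clip(ratio, 1 - eps, 1 + eps) * adv).mean()

rng = np.random.default_rng(0)
# On-policy batch: behavior policy == current policy at rollout time.
on = clipped_pg_loss(rng.normal(-1, .1, 256), rng.normal(-1, .1, 256),
                     rng.normal(0, 1, 256))
# Off-policy batch replayed from earlier steps (higher UTD ratio).
off = clipped_pg_loss(rng.normal(-1, .1, 512), rng.normal(-1.3, .2, 512),
                      rng.normal(0, 1, 512))
beta = 0.5                     # assumed mixing weight between the two estimates
print(beta * on + (1 - beta) * off)
```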
Submitted 11 July, 2025; v1 submitted 9 July, 2025;
originally announced July 2025.
-
On the Hardness of Unsupervised Domain Adaptation: Optimal Learners and Information-Theoretic Perspective
Authors:
Zhiyi Dong,
Zixuan Liu,
Yongyi Mao
Abstract:
This paper studies the hardness of unsupervised domain adaptation (UDA) under covariate shift. We model the uncertainty that the learner faces by a distribution $π$ over the ground-truth triples $(p, q, f)$ -- which we call a UDA class -- where $(p, q)$ is the source--target distribution pair and $f$ is the classifier. We define the performance of a learner as the overall target domain risk, averaged over the randomness of the ground-truth triple. This formulation couples the source distribution, the target distribution and the classifier in the ground truth, and deviates from the classical worst-case analyses, which pessimistically emphasize the impact of hard but rare UDA instances. In this formulation, we precisely characterize the optimal learner. The performance of the optimal learner then allows us to define the learning difficulty for the UDA class and for the observed sample. To quantify this difficulty, we introduce an information-theoretic quantity -- Posterior Target Label Uncertainty (PTLU) -- along with its empirical estimate (EPTLU) from the sample, which together capture the uncertainty in the prediction for the target domain. Briefly, PTLU is the entropy of the predicted label in the target domain under the posterior distribution of the ground-truth classifier given the observed source and target samples. By proving that this quantity lower-bounds the risk of any learner, we suggest that these quantities can serve as proxies for evaluating the hardness of UDA learning. We provide several examples to demonstrate the advantage of PTLU, relative to existing measures, in evaluating the difficulty of UDA learning.
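For concreteness, one plausible formalization of PTLU consistent with the description above (the paper's exact definition may differ in conditioning details) is

\[
\mathrm{PTLU} \;=\; \mathbb{E}_{x \sim q}\Big[ H\big( \mathbb{E}_{f \sim \pi(\cdot \mid S)}[\, f(x) \,] \big) \Big],
\]

where $S$ denotes the observed source and target samples, $\pi(\cdot \mid S)$ is the posterior over ground-truth classifiers, and $H$ is the Shannon entropy of the resulting predictive label distribution; EPTLU would replace the expectation over $x \sim q$ with an average over the observed target sample.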
Submitted 9 July, 2025;
originally announced July 2025.
-
Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities
Authors:
Gheorghe Comanici,
Eric Bieber,
Mike Schaekermann,
Ice Pasupat,
Noveen Sachdeva,
Inderjit Dhillon,
Marcel Blistein,
Ori Ram,
Dan Zhang,
Evan Rosen,
Luke Marris,
Sam Petulla,
Colin Gaffney,
Asaf Aharoni,
Nathan Lintz,
Tiago Cardal Pais,
Henrik Jacobsson,
Idan Szpektor,
Nan-Jiang Jiang,
Krishna Haridasan,
Ahmed Omran,
Nikunj Saunshi,
Dara Bahri,
Gaurav Mishra,
Eric Chu
, et al. (3284 additional authors not shown)
Abstract:
In this report, we introduce the Gemini 2.X model family: Gemini 2.5 Pro and Gemini 2.5 Flash, as well as our earlier Gemini 2.0 Flash and Flash-Lite models. Gemini 2.5 Pro is our most capable model yet, achieving SoTA performance on frontier coding and reasoning benchmarks. Beyond its coding and reasoning skills, Gemini 2.5 Pro is a thinking model that excels at multimodal understanding and can now process up to 3 hours of video content. Its long-context, multimodal, and reasoning capabilities combine to unlock new agentic workflows. Gemini 2.5 Flash provides excellent reasoning abilities at a fraction of the compute and latency requirements, and Gemini 2.0 Flash and Flash-Lite provide high performance at low latency and cost. Taken together, the Gemini 2.X model generation spans the full Pareto frontier of model capability vs. cost, allowing users to explore the boundaries of what is possible with complex agentic problem solving.
Submitted 22 July, 2025; v1 submitted 7 July, 2025;
originally announced July 2025.
-
PrefixAgent: An LLM-Powered Design Framework for Efficient Prefix Adder Optimization
Authors:
Dongsheng Zuo,
Jiadong Zhu,
Yang Luo,
Yuzhe Ma
Abstract:
Prefix adders are fundamental arithmetic circuits, but their design space grows exponentially with bit-width, posing significant optimization challenges. Previous works face limitations in performance, generalization, and scalability. To address these challenges, we propose PrefixAgent, a large language model (LLM)-powered framework that enables efficient prefix adder optimization. Specifically, PrefixAgent reformulates the problem into subtasks, including backbone synthesis and structure refinement, which effectively reduces the search space. More importantly, this new design perspective enables us to efficiently collect large amounts of high-quality data and reasoning traces with E-graphs, which in turn enables effective fine-tuning of the LLM. Experimental results show that PrefixAgent synthesizes prefix adders with consistently smaller areas than baseline methods, while maintaining scalability and generalization in commercial EDA flows.
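For readers unfamiliar with prefix adders, the search space is built from the associative (generate, propagate) operator sketched below; the Python sketch computes carries with a serial prefix scan, while real designs (and PrefixAgent's backbone synthesis) arrange this operator into log-depth parallel trees:

def combine(hi, lo):
    # (g, p) o (g', p'): the associative carry operator of prefix adders.
    g_hi, p_hi = hi
    g_lo, p_lo = lo
    return (g_hi | (p_hi & g_lo), p_hi & p_lo)

def carries(a_bits, b_bits):
    # Serial prefix scan over per-bit (generate, propagate) pairs, LSB first.
    gp = [(a & b, a ^ b) for a, b in zip(a_bits, b_bits)]
    out, acc = [], None
    for x in gp:
        acc = x if acc is None else combine(x, acc)
        out.append(acc[0])
    return out  # carry-out of each bit position

print(carries([1, 1, 0, 1], [1, 0, 1, 0]))  # [1, 1, 1, 1]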
Submitted 8 July, 2025;
originally announced July 2025.
-
From General Relation Patterns to Task-Specific Decision-Making in Continual Multi-Agent Coordination
Authors:
Chang Yao,
Youfang Lin,
Shoucheng Song,
Hao Wu,
Yuqing Ma,
Shang Han,
Kai Lv
Abstract:
Continual Multi-Agent Reinforcement Learning (Co-MARL) requires agents to address catastrophic forgetting while learning new coordination policies under dynamic team compositions. In this paper, we delve into the core of Co-MARL, namely relation patterns, which refer to agents' general understanding of interactions. In addition to their generality, relation patterns exhibit task-specificity when mapped to different action spaces. To this end, we propose a novel method called General Relation Patterns-Guided Task-Specific Decision-Maker (RPG). In RPG, agents extract relation patterns from dynamic observation spaces using a relation capturer. These task-agnostic relation patterns are then mapped to different action spaces via a task-specific decision-maker generated by a conditional hypernetwork. To combat forgetting, we further introduce regularization terms on both the relation capturer and the conditional hypernetwork. Results on SMAC and LBF demonstrate that RPG effectively prevents catastrophic forgetting when learning new tasks and achieves zero-shot generalization to unseen tasks.
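As a sketch of the decision-maker generation step under assumed shapes (the relation capturer and RPG's exact hypernetwork design are not reproduced here), a conditional hypernetwork can emit the weights of a task-specific head from a task embedding:

import torch
import torch.nn as nn

class HyperHead(nn.Module):
    def __init__(self, task_dim, feat_dim, n_actions):
        super().__init__()
        self.feat_dim, self.n_actions = feat_dim, n_actions
        # The hypernetwork outputs a full weight matrix and bias for the head.
        self.hyper = nn.Linear(task_dim, feat_dim * n_actions + n_actions)

    def forward(self, relation_feat, task_emb):
        params = self.hyper(task_emb)
        W = params[: self.feat_dim * self.n_actions].view(self.n_actions, self.feat_dim)
        b = params[self.feat_dim * self.n_actions:]
        return relation_feat @ W.T + b  # task-specific action logits

head = HyperHead(task_dim=8, feat_dim=32, n_actions=6)
logits = head(torch.randn(4, 32), torch.randn(8))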
Submitted 8 July, 2025;
originally announced July 2025.
-
Text-Guided Token Communication for Wireless Image Transmission
Authors:
Bole Liu,
Li Qiao,
Ye Wang,
Zhen Gao,
Yu Ma,
Keke Ying,
Tong Qin
Abstract:
With the emergence of 6G networks and the proliferation of visual applications, efficient image transmission under adverse channel conditions is critical. We present a text-guided token communication system leveraging pre-trained foundation models for wireless image transmission with low bandwidth. Our approach converts images to discrete tokens, applies 5G NR polar coding, and employs text-guided token prediction for reconstruction. Evaluations on ImageNet show our method outperforms Deep Source Channel Coding with Attention Modules (ADJSCC) in perceptual quality and semantic preservation at Signal-to-Noise Ratios (SNRs) above 0 dB while mitigating the cliff effect at lower SNRs. Our system requires no scenario-specific retraining and exhibits superior cross-dataset generalization, establishing a new paradigm for efficient image transmission aligned with human perceptual priorities.
Submitted 8 July, 2025;
originally announced July 2025.
-
PaddleOCR 3.0 Technical Report
Authors:
Cheng Cui,
Ting Sun,
Manhui Lin,
Tingquan Gao,
Yubo Zhang,
Jiaxuan Liu,
Xueqing Wang,
Zelun Zhang,
Changda Zhou,
Hongen Liu,
Yue Zhang,
Wenyu Lv,
Kui Huang,
Yichao Zhang,
Jing Zhang,
Jun Zhang,
Yi Liu,
Dianhai Yu,
Yanjun Ma
Abstract:
This technical report introduces PaddleOCR 3.0, an Apache-licensed open-source toolkit for OCR and document parsing. To address the growing demand for document understanding in the era of large language models, PaddleOCR 3.0 presents three major solutions: (1) PP-OCRv5 for multilingual text recognition, (2) PP-StructureV3 for hierarchical document parsing, and (3) PP-ChatOCRv4 for key information extraction. With fewer than 100 million parameters each, these models achieve accuracy and efficiency competitive with mainstream billion-parameter vision-language models (VLMs). In addition to offering a high-quality OCR model library, PaddleOCR 3.0 provides efficient tools for training, inference, and deployment, supports heterogeneous hardware acceleration, and enables developers to easily build intelligent document applications.
Submitted 7 July, 2025;
originally announced July 2025.
-
AXLearn: Modular Large Model Training on Heterogeneous Infrastructure
Authors:
Mark Lee,
Tom Gunter,
Chang Lan,
John Peebles,
Hanzhi Zhou,
Kelvin Zou,
Sneha Bangalore,
Chung-Cheng Chiu,
Nan Du,
Xianzhi Du,
Philipp Dufter,
Ruixuan Hou,
Haoshuo Huang,
Dongseong Hwang,
Xiang Kong,
Jinhao Lei,
Tao Lei,
Meng Li,
Li Li,
Jiarui Lu,
Zhiyun Lu,
Yiping Ma,
David Qiu,
Vivek Rathod,
Senyu Tong
, et al. (12 additional authors not shown)
Abstract:
We design and implement AXLearn, a production deep learning system that facilitates scalable and high-performance training of large deep learning models. Compared to other state-of-the-art deep learning systems, AXLearn has a unique focus on modularity and support for heterogeneous hardware infrastructure. AXLearn's internal interfaces between software components follow strict encapsulation, allowing different components to be assembled to facilitate rapid model development and experimentation on heterogeneous compute infrastructure. We introduce a novel method of quantifying modularity via Lines-of-Code (LoC) complexity, which demonstrates how our system maintains constant complexity as we scale the components in the system, compared to linear or quadratic complexity in other systems. This allows integrating features such as Rotary Position Embeddings (RoPE) into AXLearn across hundreds of modules with just 10 lines of code, compared to the hundreds of lines required in other systems. At the same time, AXLearn maintains equivalent performance compared to state-of-the-art training systems. Finally, we share our experience in the development and operation of AXLearn.
Submitted 9 July, 2025; v1 submitted 7 July, 2025;
originally announced July 2025.
-
Beyond Simple Edits: X-Planner for Complex Instruction-Based Image Editing
Authors:
Chun-Hsiao Yeh,
Yilin Wang,
Nanxuan Zhao,
Richard Zhang,
Yuheng Li,
Yi Ma,
Krishna Kumar Singh
Abstract:
Recent diffusion-based image editing methods have significantly advanced text-guided tasks but often struggle to interpret complex, indirect instructions. Moreover, current models frequently suffer from poor identity preservation, unintended edits, or heavy reliance on manual masks. To address these challenges, we introduce X-Planner, a Multimodal Large Language Model (MLLM)-based planning system that effectively bridges user intent with editing model capabilities. X-Planner employs chain-of-thought reasoning to systematically decompose complex instructions into simpler, clear sub-instructions. For each sub-instruction, X-Planner automatically generates precise edit types and segmentation masks, eliminating manual intervention and ensuring localized, identity-preserving edits. Additionally, we propose a novel automated pipeline for generating large-scale data to train X-Planner, which achieves state-of-the-art results on both existing benchmarks and our newly introduced complex editing benchmark.
Submitted 7 July, 2025;
originally announced July 2025.
-
ABench-Physics: Benchmarking Physical Reasoning in LLMs via High-Difficulty and Dynamic Physics Problems
Authors:
Yiming Zhang,
Yingfan Ma,
Yanmei Gu,
Zhengkai Yang,
Yihong Zhuang,
Feng Wang,
Zenan Huang,
Yuanyuan Wang,
Chao Huang,
Bowen Song,
Cheng Lin,
Junbo Zhao
Abstract:
Large Language Models (LLMs) have shown impressive performance in domains such as mathematics and programming, yet their capabilities in physics remain underexplored and poorly understood. Physics poses unique challenges that demand not only precise computation but also deep conceptual understanding and physical modeling skills. Existing benchmarks often fall short due to limited difficulty, multiple-choice formats, and static evaluation settings that fail to capture physical modeling ability. In this paper, we introduce ABench-Physics, a novel benchmark designed to rigorously evaluate LLMs' physical reasoning and generalization capabilities. ABench-Physics consists of two components: Phy_A, a static set of 400 graduate- or Olympiad-level problems; and Phy_B, a dynamic subset of 100 problems equipped with an automatic variation engine to test model robustness across changing conditions. All questions require precise numerical answers, with strict formatting and tolerance constraints. Our evaluation of several state-of-the-art LLMs reveals substantial performance gaps, highlighting persistent limitations in physical reasoning, especially in generalization to dynamic variants. ABench-Physics provides a challenging and diagnostic framework for advancing scientific reasoning in LLMs.
Submitted 7 July, 2025;
originally announced July 2025.
-
Can Prompt Difficulty be Online Predicted for Accelerating RL Finetuning of Reasoning Models?
Authors:
Yun Qu,
Qi Cheems Wang,
Yixiu Mao,
Vincent Tao Hu,
Xiangyang Ji
Abstract:
Recent advances have demonstrated the effectiveness of reinforcement learning (RL) finetuning in enhancing the reasoning capabilities of large language models (LLMs). The optimization process often requires numerous iterations to achieve satisfactory performance, resulting in high computational costs due to the need for frequent prompt evaluations under intensive LLM interactions and repeated policy updates. Appropriate online prompt selection methods reduce iteration steps by prioritizing informative prompts during training, yet the pipeline's reliance on exhaustive prompt evaluation and subset selection for optimization still incurs substantial computational overhead due to frequent LLM inference calls. Distinguished from these direct evaluate-then-select schemes, this work investigates iterative approximate evaluation for arbitrary prompts and introduces Model Predictive Prompt Selection (MoPPS), a Bayesian risk-predictive framework that estimates prompt difficulty online without requiring costly LLM interactions. Technically, MoPPS models each prompt's success rate as a latent variable, performs streaming Bayesian inference, and employs posterior sampling in a constructed multi-armed bandit, enabling sample-efficient and adaptive prompt selection. Extensive experiments across mathematics, planning, and vision-based geometry tasks show that MoPPS reliably predicts prompt difficulty and accelerates training with significantly fewer LLM rollouts.
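A minimal sketch of the posterior-sampling idea, assuming a Beta-Bernoulli model of per-prompt success (the paper's streaming inference and bandit construction may differ in detail):

import numpy as np

class PromptBandit:
    def __init__(self, n_prompts, target=0.5):
        self.a = np.ones(n_prompts)  # Beta alpha: pseudo-count of successes
        self.b = np.ones(n_prompts)  # Beta beta: pseudo-count of failures
        self.target = target         # preferred success rate (most informative prompts)

    def select(self, k):
        theta = np.random.beta(self.a, self.b)  # Thompson sample per prompt
        return np.argsort(np.abs(theta - self.target))[:k]

    def update(self, idx, successes, trials):
        self.a[idx] += successes
        self.b[idx] += trials - successes

bandit = PromptBandit(n_prompts=1000)
batch = bandit.select(k=32)               # pick prompts predicted to be ~50% solvable
bandit.update(batch[0], successes=3, trials=8)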
Submitted 16 July, 2025; v1 submitted 6 July, 2025;
originally announced July 2025.
-
QR-LoRA: Efficient and Disentangled Fine-tuning via QR Decomposition for Customized Generation
Authors:
Jiahui Yang,
Yongjia Ma,
Donglin Di,
Hao Li,
Wei Chen,
Yan Xie,
Jianxun Cui,
Xun Yang,
Wangmeng Zuo
Abstract:
Existing text-to-image models often rely on parameter fine-tuning techniques such as Low-Rank Adaptation (LoRA) to customize visual attributes. However, when combining multiple LoRA models for content-style fusion tasks, unstructured modifications of weight matrices often lead to undesired feature entanglement between content and style attributes. We propose QR-LoRA, a novel fine-tuning framework leveraging QR decomposition for structured parameter updates that effectively separate visual attributes. Our key insight is that the orthogonal Q matrix naturally minimizes interference between different visual features, while the upper triangular R matrix efficiently encodes attribute-specific transformations. Our approach fixes both Q and R matrices while only training an additional task-specific $ΔR$ matrix. This structured design reduces trainable parameters to half of conventional LoRA methods and supports effective merging of multiple adaptations without cross-contamination due to the strong disentanglement properties between $ΔR$ matrices. Experiments demonstrate that QR-LoRA achieves superior disentanglement in content-style fusion tasks, establishing a new paradigm for parameter-efficient, disentangled fine-tuning in generative models.
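A minimal PyTorch sketch of the stated parameterization, with initialization and merging details assumed: Q and R come from a QR decomposition of the pretrained weight and stay frozen, while only a task-specific ΔR is trained.

import torch
import torch.nn as nn

class QRLoRALinear(nn.Module):
    def __init__(self, weight: torch.Tensor):
        super().__init__()
        Q, R = torch.linalg.qr(weight)  # weight ~= Q @ R
        self.register_buffer("Q", Q)    # frozen orthogonal basis
        self.register_buffer("R", R)    # frozen upper-triangular factor
        self.delta_R = nn.Parameter(torch.zeros_like(R))  # the only trainable part

    def forward(self, x):
        return x @ (self.Q @ (self.R + self.delta_R)).T

layer = QRLoRALinear(torch.randn(64, 64))
y = layer(torch.randn(2, 64))

Under this scheme, merging two adaptations reduces to summing their ΔR matrices against the shared frozen Q and R, which is where the claimed resistance to cross-contamination would come from.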
Submitted 6 July, 2025;
originally announced July 2025.
-
An Efficient Max-Min Fair Resource Optimization Algorithm for Rate-Splitting Multiple Access
Authors:
Facheng Luo,
Yijie Mao
Abstract:
The max-min fairness (MMF) problem in rate-splitting multiple access (RSMA) is known to be challenging due to its non-convex and non-smooth nature, as well as the coupled beamforming and common rate variables. Conventional algorithms for this problem often incur high computational complexity or degraded MMF rate performance. To address these challenges, in this work, we propose a novel optimization algorithm named extragradient-fractional programming (EG-FP) for the MMF problem of downlink RSMA. The proposed algorithm first leverages FP to transform the original problem into a block-wise convex problem. For the precoding subproblem, we show that its Lagrangian dual is equivalent to a variational inequality problem, which is then solved using an extragradient-based algorithm. Additionally, we derive the optimal beamforming structure of the problem and, based on it, introduce a low-dimensional EG-FP algorithm whose computational complexity is independent of the number of transmit antennas. This feature is especially beneficial in scenarios with many transmit antennas. The proposed algorithms are then extended to handle imperfect channel state information at the transmitter (CSIT). Numerical results demonstrate that the MMF rate achieved by our proposed algorithms closely matches that of the conventional successive convex approximation (SCA) algorithm and significantly outperforms other baseline schemes. Remarkably, the average CPU time of the proposed algorithms is less than 10\% of the runtime required by the SCA algorithm, showing their efficiency and scalability.
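For reference, the generic extragradient iteration for a variational inequality VI(F, C) looks as follows; the toy operator and step size are illustrative stand-ins, not the paper's precoding problem:

import numpy as np

def extragradient(F, proj, x0, step=0.3, iters=500):
    x = x0
    for _ in range(iters):
        x_half = proj(x - step * F(x))   # extrapolation (look-ahead) step
        x = proj(x - step * F(x_half))   # update using the look-ahead operator value
    return x

# Toy usage: a rotational (skew-symmetric, monotone) field on the unit ball,
# where plain projected gradient descent circles but extragradient converges.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
proj = lambda x: x / max(1.0, np.linalg.norm(x))
x_star = extragradient(F, proj, np.array([1.0, 1.0]))  # -> near the solution [0, 0]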
Submitted 18 July, 2025; v1 submitted 5 July, 2025;
originally announced July 2025.
-
Attention-Guided Multi-Scale Local Reconstruction for Point Clouds via Masked Autoencoder Self-Supervised Learning
Authors:
Xin Cao,
Haoyu Wang,
Yuzhu Mao,
Xinda Liu,
Linzhi Su,
Kang Li
Abstract:
Self-supervised learning has emerged as a prominent research direction in point cloud processing. While existing models predominantly concentrate on reconstruction tasks at higher encoder layers, they often neglect the effective utilization of low-level local features, which are typically employed solely for activation computations rather than directly contributing to reconstruction tasks. To overcome this limitation, we introduce PointAMaLR, a novel self-supervised learning framework that enhances feature representation and processing accuracy through attention-guided multi-scale local reconstruction. PointAMaLR implements hierarchical reconstruction across multiple local regions, with lower layers focusing on fine-scale feature restoration while upper layers address coarse-scale feature reconstruction, thereby enabling complex inter-patch interactions. Furthermore, to augment feature representation capabilities, we incorporate a Local Attention (LA) module in the embedding layer to enhance semantic feature understanding. Comprehensive experiments on benchmark datasets ModelNet and ShapeNet demonstrate PointAMaLR's superior accuracy and quality in both classification and reconstruction tasks. Moreover, when evaluated on the real-world dataset ScanObjectNN and the 3D large scene segmentation dataset S3DIS, our model achieves highly competitive performance metrics. These results not only validate PointAMaLR's effectiveness in multi-scale semantic understanding but also underscore its practical applicability in real-world scenarios.
Submitted 5 July, 2025;
originally announced July 2025.
-
SPEAR: Structured Pruning for Spiking Neural Networks via Synaptic Operation Estimation and Reinforcement Learning
Authors:
Hui Xie,
Yuhe Liu,
Shaoqi Yang,
Jinyang Guo,
Yufei Guo,
Yuqing Ma,
Jiaxin Chen,
Jiaheng Liu,
Xianglong Liu
Abstract:
While deep spiking neural networks (SNNs) demonstrate superior performance, their deployment on resource-constrained neuromorphic hardware remains challenging. Network pruning offers a viable solution by reducing both parameters and synaptic operations (SynOps) to facilitate the edge deployment of SNNs; among these, search-based pruning methods search for the SNN structure after pruning. However, existing search-based methods cannot use SynOps directly as the constraint because it changes dynamically during the search, so the final searched network may violate the expected SynOps target. In this paper, we introduce a novel SNN pruning framework called SPEAR, which leverages reinforcement learning (RL) to use SynOps directly as the search constraint. To avoid violating the SynOps requirement, we first propose a SynOps prediction mechanism called LRE to accurately predict the final SynOps after the search. Observing that SynOps cannot be explicitly calculated and imposed to constrain the actions in RL, we propose a novel reward called TAR to stabilize the search. Extensive experiments show that our SPEAR framework can effectively compress SNNs under specific SynOps constraints.
Submitted 28 June, 2025;
originally announced July 2025.
-
LLM-Driven Treatment Effect Estimation Under Inference Time Text Confounding
Authors:
Yuchen Ma,
Dennis Frauen,
Jonas Schweisthal,
Stefan Feuerriegel
Abstract:
Estimating treatment effects is crucial for personalized decision-making in medicine, but this task faces unique challenges in clinical practice. At training time, models for estimating treatment effects are typically trained on well-structured medical datasets that contain detailed patient information. However, at inference time, predictions are often made using textual descriptions (e.g., descriptions with self-reported symptoms), which are incomplete representations of the original patient information. In this work, we make three contributions. (1) We show that the discrepancy between the data available during training time and inference time can lead to biased estimates of treatment effects. We formalize this issue as an inference time text confounding problem, where confounders are fully observed during training time but only partially available through text at inference time. (2) To address this problem, we propose a novel framework for estimating treatment effects that explicitly accounts for inference time text confounding. Our framework leverages large language models together with a custom doubly robust learner to mitigate biases caused by the inference time text confounding. (3) Through a series of experiments, we demonstrate the effectiveness of our framework in real-world applications.
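As background for the doubly robust component, the standard doubly robust (AIPW) pseudo-outcome that such learners typically build on (the paper's custom learner may differ) is

\[
\hat{\tau}(x) \;=\; \hat{\mu}_1(x) - \hat{\mu}_0(x) \;+\; \frac{A\,(Y - \hat{\mu}_1(x))}{\hat{e}(x)} \;-\; \frac{(1-A)\,(Y - \hat{\mu}_0(x))}{1 - \hat{e}(x)},
\]

where $A$ is the treatment indicator, $Y$ the outcome, $\hat{\mu}_a$ the outcome models and $\hat{e}$ the propensity score; the estimate stays consistent if either the outcome models or the propensity model is correctly specified.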
Submitted 3 July, 2025;
originally announced July 2025.
-
Multi-Agent Reinforcement Learning for Dynamic Pricing in Supply Chains: Benchmarking Strategic Agent Behaviours under Realistically Simulated Market Conditions
Authors:
Thomas Hazenberg,
Yao Ma,
Seyed Sahand Mohammadi Ziabari,
Marijn van Rijswijk
Abstract:
This study investigates how Multi-Agent Reinforcement Learning (MARL) can improve dynamic pricing strategies in supply chains, particularly in contexts where traditional ERP systems rely on static, rule-based approaches that overlook strategic interactions among market actors. While recent research has applied reinforcement learning to pricing, most implementations remain single-agent and fail to model the interdependent nature of real-world supply chains. This study addresses that gap by evaluating the performance of three MARL algorithms -- MADDPG, MADQN, and QMIX -- against static rule-based baselines, within a simulated environment informed by real e-commerce transaction data and a LightGBM demand prediction model. Results show that rule-based agents achieve near-perfect fairness (Jain's index: 0.9896) and the highest price stability (volatility: 0.024), but they entirely lack competitive dynamics. Among the MARL agents, MADQN exhibits the most aggressive pricing behaviour, with the highest volatility and the lowest fairness (0.5844). MADDPG provides a more balanced approach, supporting market competition (share volatility: 9.5 pp) while maintaining relatively high fairness (0.8819) and stable pricing. These findings suggest that MARL introduces emergent strategic behaviour not captured by static pricing rules and may inform future developments in dynamic pricing.
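For reference, the Jain's fairness index reported above has the standard closed form, shown here as a small Python helper:

def jains_index(x):
    # 1/n for a maximally unfair allocation, 1.0 for perfectly equal shares.
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

print(jains_index([1, 1, 1, 1]))   # 1.0
print(jains_index([10, 0, 0, 0]))  # 0.25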
Submitted 3 July, 2025;
originally announced July 2025.
-
ViRefSAM: Visual Reference-Guided Segment Anything Model for Remote Sensing Segmentation
Authors:
Hanbo Bi,
Yulong Xu,
Ya Li,
Yongqiang Mao,
Boyuan Tong,
Chongyang Li,
Chunbo Lang,
Wenhui Diao,
Hongqi Wang,
Yingchao Feng,
Xian Sun
Abstract:
The Segment Anything Model (SAM), with its prompt-driven paradigm, exhibits strong generalization in generic segmentation tasks. However, applying SAM to remote sensing (RS) images still faces two major challenges. First, manually constructing precise prompts for each image (e.g., points or boxes) is labor-intensive and inefficient, especially in RS scenarios with dense small objects or spatially fragmented distributions. Second, SAM lacks domain adaptability, as it is pre-trained primarily on natural images and struggles to capture RS-specific semantics and spatial characteristics, especially when segmenting novel or unseen classes. To address these issues, inspired by few-shot learning, we propose ViRefSAM, a novel framework that guides SAM utilizing only a few annotated reference images that contain class-specific objects. Without requiring manual prompts, ViRefSAM enables automatic segmentation of class-consistent objects across RS images. Specifically, ViRefSAM introduces two key components while keeping SAM's original architecture intact: (1) a Visual Contextual Prompt Encoder that extracts class-specific semantic clues from reference images and generates object-aware prompts via contextual interaction with target images; and (2) a Dynamic Target Alignment Adapter, integrated into SAM's image encoder, which mitigates the domain gap by injecting class-specific semantics into target image features, enabling SAM to dynamically focus on task-relevant regions. Extensive experiments on three few-shot segmentation benchmarks, including iSAID-5$^i$, LoveDA-2$^i$, and COCO-20$^i$, demonstrate that ViRefSAM enables accurate and automatic segmentation of unseen classes by leveraging only a few reference images and consistently outperforms existing few-shot segmentation methods across diverse datasets.
Submitted 3 July, 2025;
originally announced July 2025.
-
Template-Based Schema Matching of Multi-Layout Tenancy Schedules: A Comparative Study of a Template-Based Hybrid Matcher and the ALITE Full Disjunction Model
Authors:
Tim Uilkema,
Yao Ma,
Seyed Sahand Mohammadi Ziabari,
Joep van Vliet
Abstract:
The lack of standardized tabular formats for tenancy schedules across real estate firms creates significant inefficiencies in data integration. Existing automated integration methods, such as Full Disjunction (FD)-based models like ALITE, prioritize completeness but result in schema bloat, sparse attributes and limited business usability. We propose a novel hybrid, template-based schema matcher that aligns multi-layout tenancy schedules to a predefined target schema. The matcher combines schema-based metrics (Jaccard, Levenshtein) and instance-based metrics (data types, distributions) with globally optimal assignments determined via the Hungarian algorithm. Evaluation against a manually labeled ground truth demonstrates substantial improvements, with grid search optimization yielding a peak F1-score of 0.881 and an overall null percentage of 45.7%. On a separate ground truth of 20 semantically similar column sets, ALITE achieves an F1-score of 0.712 and 75.6% nulls. These results suggest that combining structured business knowledge with hybrid matching can yield more usable and business-aligned schema mappings. The approach assumes cleanly extracted tabular input; future work could extend the matcher to support complex, composite tables.
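A small sketch of the globally optimal assignment step: given a column-similarity matrix combining (hypothetical) schema- and instance-level scores, the Hungarian algorithm yields a one-to-one mapping; the threshold below is illustrative, not the paper's tuned value:

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_columns(sim: np.ndarray, threshold: float = 0.3):
    rows, cols = linear_sum_assignment(-sim)  # negate to maximize total similarity
    return [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= threshold]

sim = np.array([[0.9, 0.1, 0.2],   # source column 0 vs. template columns
                [0.2, 0.8, 0.1],
                [0.1, 0.3, 0.4]])
print(match_columns(sim))  # [(0, 0), (1, 1), (2, 2)]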
Submitted 2 July, 2025;
originally announced July 2025.
-
Exploring a Hybrid Deep Learning Approach for Anomaly Detection in Mental Healthcare Provider Billing: Addressing Label Scarcity through Semi-Supervised Anomaly Detection
Authors:
Samirah Bakker,
Yao Ma,
Seyed Sahand Mohammadi Ziabari
Abstract:
The complexity of mental healthcare billing enables anomalies, including fraud. While machine learning methods have been applied to anomaly detection, they often struggle with class imbalance, label scarcity, and complex sequential patterns. This study explores a hybrid deep learning approach combining Long Short-Term Memory (LSTM) networks and Transformers, with pseudo-labeling via Isolation Forests (iForest) and Autoencoders (AE). Prior work has not evaluated such hybrid models trained on pseudo-labeled data in the context of healthcare billing. The approach is evaluated on two real-world billing datasets related to mental healthcare. The iForest LSTM baseline achieves the highest recall (0.963) on declaration-level data. On the operation-level data, the hybrid iForest-based model achieves the highest recall (0.744), though at the cost of lower precision. These findings highlight the potential of combining pseudo-labeling with hybrid deep learning in complex, imbalanced anomaly detection settings.
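A minimal sketch of the pseudo-labeling step in its assumed form: an Isolation Forest scores unlabeled records, and its flagged fraction becomes pseudo-labels for training the downstream sequence models (the feature shapes and contamination rate here are illustrative):

import numpy as np
from sklearn.ensemble import IsolationForest

X = np.random.randn(1000, 16)  # stand-in for engineered billing features
iforest = IsolationForest(contamination=0.05, random_state=0).fit(X)
pseudo_labels = (iforest.predict(X) == -1).astype(int)  # 1 = anomaly, 0 = normal
# pseudo_labels can now supervise an LSTM/Transformer over the claim sequences.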
Submitted 2 July, 2025;
originally announced July 2025.
-
Joint Power Control and Precoding for Cell-Free Massive MIMO Systems With Sparse Multi-Dimensional Graph Neural Networks
Authors:
Yukun Ma,
Jiayi Zhang,
Ziheng Liu,
Guowei Shi,
Bo Ai
Abstract:
Cell-free massive multiple-input multiple-output (CF mMIMO) has emerged as a prominent candidate for future networks due to its ability to significantly enhance spectral efficiency by eliminating inter-cell interference. However, its practical deployment faces considerable challenges, such as high computational complexity and the optimization of its complex processing. To address these challenges, this correspondence proposes a framework based on a sparse multi-dimensional graph neural network (SP-MDGNN), which sparsifies the connections between access points (APs) and user equipments (UEs) to significantly reduce computational complexity while maintaining high performance. In addition, the weighted minimum mean square error (WMMSE) algorithm is introduced as a comparative method to further analyze the trade-off between performance and complexity. Simulation results demonstrate that the sparse method achieves an optimal balance between performance and complexity, significantly reducing the computational complexity of the original MDGNN method while incurring only a slight performance degradation, providing insights for the practical deployment of CF mMIMO systems in large-scale networks.
Submitted 2 July, 2025;
originally announced July 2025.
-
Autoregressive Image Generation with Linear Complexity: A Spatial-Aware Decay Perspective
Authors:
Yuxin Mao,
Zhen Qin,
Jinxing Zhou,
Hui Deng,
Xuyang Shen,
Bin Fan,
Jing Zhang,
Yiran Zhong,
Yuchao Dai
Abstract:
Autoregressive (AR) models have garnered significant attention in image generation for their ability to effectively capture both local and global structures within visual data. However, prevalent AR models predominantly rely on transformer architectures, which suffer from quadratic computational complexity in the input sequence length and substantial memory overhead from maintaining key-value caches. Although linear attention mechanisms have successfully reduced this burden in language models, our initial experiments reveal that they significantly degrade image generation quality because of their inability to capture critical long-range dependencies in visual data. We propose Linear Attention with Spatial-Aware Decay (LASAD), a novel attention mechanism that explicitly preserves genuine 2D spatial relationships within flattened image sequences by computing position-dependent decay factors based on true 2D spatial locations rather than 1D sequence positions. Based on this mechanism, we present LASADGen, an autoregressive image generator that enables selective attention to relevant spatial contexts with linear complexity. Experiments on ImageNet show LASADGen achieves state-of-the-art image generation performance and computational efficiency, bridging the gap between linear attention's efficiency and the spatial understanding needed for high-quality generation.
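The core idea can be sketched in a few lines: compute decay between flattened tokens from their true 2D grid positions rather than their 1D sequence distance. The exponential-of-Manhattan-distance form below is an assumption for illustration, and the full matrix is materialized only for clarity (linear-time variants would apply the decay implicitly):

import torch

def spatial_decay(h, w, gamma=0.9):
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pos = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (h*w, 2)
    dist = torch.cdist(pos, pos, p=1)  # Manhattan distance on the 2D grid
    return gamma ** dist               # position-dependent decay factors

D = spatial_decay(4, 4)  # 16x16: 2D neighbors keep high weight even when far apart in 1D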
Submitted 2 July, 2025;
originally announced July 2025.
-
Learning to Segment for Vehicle Routing Problems
Authors:
Wenbin Ouyang,
Sirui Li,
Yining Ma,
Cathy Wu
Abstract:
Iterative search heuristics are widely recognized as state-of-the-art for solving Vehicle Routing Problems (VRPs). In this work, we identify and exploit a critical observation: within these solvers, a large portion of the solution remains stable, i.e., unchanged across search iterations, causing redundant computations, especially for large-scale VRPs with long subtours. To address this, we pioneer the formal study of the First-Segment-Then-Aggregate (FSTA) decomposition technique to accelerate iterative solvers. Specifically, FSTA preserves stable solution segments during the search, aggregates the nodes within each segment into fixed hypernodes, and focuses the search only on the unstable portions. A key challenge, however, lies in identifying which segments FSTA should aggregate. To this end, we introduce Learning-to-Segment (L2Seg), a novel neural framework that intelligently differentiates potentially stable and unstable portions for FSTA decomposition. We present three L2Seg variants: non-autoregressive (globally comprehensive but locally indiscriminate), autoregressive (locally refined but globally deficient), and their synergy, with bespoke training and inference strategies. Empirical results on CVRP and VRPTW suggest that L2Seg accelerates state-of-the-art iterative solvers by up to 7x. Additionally, we provide an in-depth analysis showing that combining the complementary strengths of the NAR and AR variants achieves the best performance. Notably, L2Seg is a flexible framework that is compatible with traditional, learning-based, and hybrid solvers, while supporting a broad class of VRPs.
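A small sketch of FSTA's aggregation step; which segments count as stable is exactly what L2Seg learns to predict, so the stability flags are supplied by hand here:

def aggregate(tour, stable):
    # tour: list of node ids; stable: parallel list of booleans.
    hypernodes, run = [], []
    for node, is_stable in zip(tour, stable):
        if is_stable:
            run.append(node)
        else:
            if run:
                hypernodes.append(tuple(run))  # frozen segment -> one hypernode
                run = []
            hypernodes.append(node)
    if run:
        hypernodes.append(tuple(run))
    return hypernodes

print(aggregate([0, 3, 4, 5, 1, 2], [False, True, True, True, False, False]))
# -> [0, (3, 4, 5), 1, 2]; the search now only touches the unstable nodes.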
Submitted 22 June, 2025;
originally announced July 2025.
-
WebANNS: Fast and Efficient Approximate Nearest Neighbor Search in Web Browsers
Authors:
Mugeng Liu,
Siqi Zhong,
Qi Yang,
Yudong Han,
Xuanzhe Liu,
Yun Ma
Abstract:
Approximate nearest neighbor search (ANNS) has become vital to modern AI infrastructure, particularly in retrieval-augmented generation (RAG) applications. Numerous in-browser ANNS engines have emerged to integrate seamlessly with popular LLM-based web applications, while addressing privacy protection and the challenges of heterogeneous device deployment. However, web browsers present unique challenges for ANNS, including computational limitations, external storage access issues, and memory utilization constraints, which state-of-the-art (SOTA) solutions fail to address comprehensively. We propose WebANNS, a novel ANNS engine specifically designed for web browsers. WebANNS leverages WebAssembly to overcome computational bottlenecks, designs a lazy loading strategy to optimize data retrieval from external storage, and applies a heuristic approach to reduce memory usage. Experiments show that WebANNS is fast and memory-efficient, achieving up to $743.8\times$ improvement in 99th percentile query latency over the SOTA engine, while reducing memory usage by up to 39\%. Notably, WebANNS decreases query time from 10 seconds to the 10-millisecond range in browsers, making in-browser ANNS practical with user-acceptable latency.
Submitted 1 July, 2025; v1 submitted 1 July, 2025;
originally announced July 2025.