Neural and Evolutionary Computing
Showing new listings for Friday, 25 July 2025
- [1] arXiv:2507.17886 [pdf, html, other]
Title: Neuromorphic Computing: A Theoretical Framework for Time, Space, and Energy Scaling
Comments: True pre-print; to be submitted at future date
Subjects: Neural and Evolutionary Computing (cs.NE); Hardware Architecture (cs.AR); Distributed, Parallel, and Cluster Computing (cs.DC)
Neuromorphic computing (NMC) is increasingly viewed as a low-power alternative to conventional von Neumann architectures such as central processing units (CPUs) and graphics processing units (GPUs); however, its computational value proposition has been difficult to define precisely.
Here, we explain how NMC should be seen as general-purpose and programmable, even though it differs considerably from a conventional stored-program architecture. We show that the time and space scaling of NMC is equivalent to that of a conventional system with a theoretically infinite number of processors; however, the energy scaling is significantly different. Specifically, the energy of conventional systems scales with absolute algorithm work, whereas the energy of neuromorphic systems scales with the derivative of algorithm state. These characteristics make NMC suited to different classes of algorithms than conventional multi-core systems such as GPUs, which have been optimized for dense numerical applications like linear algebra. In contrast, NMC is ideally suited to scalable, sparse algorithms whose activity is proportional to an objective function, such as iterative optimization and large-scale sampling (e.g., Monte Carlo).
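A minimal toy sketch of the contrast described above, assuming a simple iterative relaxation and purely hypothetical cost constants: a "conventional" energy counter charges for every operation performed each step, while a "neuromorphic" counter charges only for state elements that actually change. This is an illustration of the scaling argument, not the paper's model.

```python
# Toy illustration (not from the paper): contrast an energy model that charges
# for every operation performed (von Neumann-style) with one that charges only
# for state *changes* (spike-like, neuromorphic-style). All constants are
# hypothetical placeholders.
import numpy as np

def relax(state, neighbors_mean, rate=0.5, steps=50, tol=1e-3):
    conventional_energy = 0.0   # ~ total operations: every element, every step
    neuromorphic_energy = 0.0   # ~ total state changes above a threshold
    for _ in range(steps):
        new_state = state + rate * (neighbors_mean(state) - state)
        conventional_energy += state.size          # dense update touches all cells
        neuromorphic_energy += np.count_nonzero(np.abs(new_state - state) > tol)
        state = new_state
    return state, conventional_energy, neuromorphic_energy

# 1-D averaging relaxation: activity (and hence the neuromorphic cost) decays
# as the state converges, while the conventional cost keeps growing linearly.
rng = np.random.default_rng(0)
x0 = rng.normal(size=1000)
mean_of_neighbors = lambda x: 0.5 * (np.roll(x, 1) + np.roll(x, -1))
_, e_conv, e_nmc = relax(x0, mean_of_neighbors)
print(e_conv, e_nmc)
```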
- [2] arXiv:2507.18179 [pdf, html, other]
Title: Explicit Sign-Magnitude Encoders Enable Power-Efficient Multipliers
Comments: Accepted and presented at the 34th International Workshop on Logic & Synthesis, June 2025
Subjects: Neural and Evolutionary Computing (cs.NE); Hardware Architecture (cs.AR); Performance (cs.PF)
This work presents a method to maximize the power efficiency of fixed-point multiplier units by decomposing them into sub-components. First, an encoder block converts the operands from a two's-complement to a sign-magnitude representation; a multiplier module then performs the compute operation and outputs the result in the original format. This makes it possible to leverage the power efficiency of the sign-magnitude encoding for the multiplication. To ensure the computing format is not altered, the two components are synthesized and optimized separately. Our method leads to significant power savings for input values centered around zero, as commonly encountered in AI workloads. Under a realistic input stream with values normally distributed with a standard deviation of 3.0, post-synthesis simulations of the 4-bit multiplier design show up to 12.9% lower switching activity compared to synthesis without decomposition. These gains are achieved while remaining compatible with any production-ready system, as the overall circuit stays logic-equivalent. With this constraint lifted and a slightly smaller input range of -7 to +7, switching-activity reductions can reach up to 33%. Additionally, we demonstrate that synthesis optimization methods based on switching-activity-driven design-space exploration can yield a further 5-10% improvement in power efficiency compared to a power-agnostic approach.
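A small sketch of why the encoding matters, assuming 4-bit operands and a zero-centered stream like the one described above (this models only the operand codewords, not the paper's synthesized multiplier): small negative values in two's complement set most high-order bits (e.g., -1 is 1111), while sign-magnitude keeps them as a sign bit plus a small magnitude, so consecutive codewords toggle fewer bits.

```python
# Illustrative sketch (not the paper's RTL): compare switching activity
# (bit flips between consecutive operand codewords) for two's-complement
# vs. sign-magnitude encodings on a zero-centered input stream.
import numpy as np

BITS = 4

def twos_complement(v):            # 4-bit two's-complement code, v in [-8, 7]
    return v & (2**BITS - 1)

def sign_magnitude(v):             # sign bit + 3-bit magnitude, v in [-7, 7]
    return ((1 << (BITS - 1)) if v < 0 else 0) | abs(v)

def toggles(codes):                # total bit flips between consecutive codes
    return sum(bin(a ^ b).count("1") for a, b in zip(codes, codes[1:]))

rng = np.random.default_rng(0)
stream = np.clip(np.rint(rng.normal(0.0, 3.0, 10000)), -7, 7).astype(int)

tc = [twos_complement(int(v)) for v in stream]
sm = [sign_magnitude(int(v)) for v in stream]
print("two's-complement toggles:", toggles(tc))
print("sign-magnitude toggles:  ", toggles(sm))
```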
- [3] arXiv:2507.18467 [pdf, html, other]
Title: Contraction, Criticality, and Capacity: A Dynamical-Systems Perspective on Echo-State Networks
Comments: 10 pages
Subjects: Neural and Evolutionary Computing (cs.NE); Chaotic Dynamics (nlin.CD)
Echo-State Networks (ESNs) distil a key neurobiological insight: richly recurrent but fixed circuitry combined with adaptive linear read-outs can transform temporal streams with remarkable efficiency. Yet fundamental questions about stability, memory and expressive power remain fragmented across disciplines. We present a unified, dynamical-systems treatment that weaves together functional analysis, random attractor theory and recent neuroscientific findings. First, on compact multivariate input alphabets we prove that the Echo-State Property (wash-out of initial conditions) together with global Lipschitz dynamics necessarily yields the Fading-Memory Property (geometric forgetting of remote inputs). Tight algebraic tests translate activation-specific Lipschitz constants into certified spectral-norm bounds, covering both saturating and rectifying nonlinearities. Second, employing a Stone-Weierstrass strategy, we give a streamlined proof that ESNs with polynomial reservoirs and linear read-outs are dense in the Banach space of causal, time-invariant fading-memory filters, extending universality to stochastic inputs. Third, we quantify computational resources via the memory-capacity spectrum, show how topology and leak rate redistribute delay-specific capacities, and link these trade-offs to Lyapunov spectra at the edge of chaos. Finally, casting ESNs as skew-product random dynamical systems, we establish the existence of singleton pullback attractors and derive conditional Lyapunov bounds, providing a rigorous analogue to cortical criticality. The analysis yields concrete design rules (spectral radius, input gain, activation choice) grounded simultaneously in mathematics and neuroscience, and clarifies why modest-sized reservoirs often rival fully trained recurrent networks in practice.
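For readers unfamiliar with the design rules named at the end of the abstract, here is a minimal reservoir sketch, assuming hypothetical values for reservoir size, spectral radius, input gain and leak rate, and a delayed-copy memory task; it is an illustration of the standard ESN setup, not the paper's experiments.

```python
# Minimal echo-state reservoir sketch (illustrative, not the paper's setup):
# a fixed random recurrent matrix rescaled to a target spectral radius, a
# leaky tanh update, and a linear read-out fitted by ridge regression.
import numpy as np

rng = np.random.default_rng(0)
N, rho, input_gain, leak = 200, 0.9, 0.5, 0.3   # hypothetical design choices

W = rng.normal(size=(N, N))
W *= rho / max(abs(np.linalg.eigvals(W)))       # enforce spectral radius rho
W_in = input_gain * rng.normal(size=(N, 1))

def run(u):                                     # u: (T,) input sequence
    x, states = np.zeros(N), []
    for u_t in u:
        pre = W @ x + W_in[:, 0] * u_t
        x = (1 - leak) * x + leak * np.tanh(pre)  # leaky-integrator update
        states.append(x.copy())
    return np.array(states)

# Fit a linear read-out to reproduce a delayed copy of the input (memory task):
# predict u(t-10) from the reservoir state x(t).
u = rng.uniform(-1, 1, 2000)
X, y = run(u)[10:], u[:-10]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
print("train MSE:", np.mean((X @ w_out - y) ** 2))
```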
New submissions (showing 3 of 3 entries)
- [4] arXiv:2507.18550 (cross-list from cs.AI) [pdf, html, other]
Title: On the Performance of Concept Probing: The Influence of the Data (Extended Version)
Comments: Extended version of the paper published in the Proceedings of the European Conference on Artificial Intelligence (ECAI 2025)
Subjects: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)
Concept probing has recently garnered increasing interest as a way to help interpret artificial neural networks, addressing both their typically large size and their subsymbolic nature, which ultimately render them infeasible for direct human interpretation. Concept probing works by training additional classifiers to map the internal representations of a model onto human-defined concepts of interest, thus allowing humans to peek inside artificial neural networks. Research on concept probing has mainly focused on the model being probed or the probing model itself, paying limited attention to the data required to train such probing models. In this paper, we address this gap. Focusing on concept probing in the context of image classification tasks, we investigate the effect of the data used to train probing models on their performance. We also make concept labels available for two widely used datasets.
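A schematic of the probing workflow described above, under stated assumptions: the hidden activations and the concept labels here are synthetic stand-ins (so the sketch is self-contained), and the linear probe is one common choice rather than the paper's specific protocol.

```python
# Schematic concept probe (illustrative): freeze a trained model, collect its
# internal representations for a set of inputs, and train a simple classifier
# to predict a human-defined concept label from those representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, hidden_dim = 2000, 256

# Stand-ins for activations from some layer of the probed network; the concept
# label ("striped" vs. not, say) is weakly encoded along a random direction.
activations = rng.normal(size=(n_samples, hidden_dim))
concept_direction = rng.normal(size=hidden_dim)
concept_labels = (activations @ concept_direction
                  + rng.normal(size=n_samples) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    activations, concept_labels, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Held-out accuracy is a proxy for how linearly decodable the concept is.
print("probe accuracy:", probe.score(X_te, y_te))
```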