Underdamped Particle Swarm Optimization

By: Matías Ezequiel Hernández Rodríguez

This article presents Underdamped Particle Swarm Optimization (UEPS), a novel metaheuristic inspired by both the Particle Swarm Optimization (PSO) algorithm and the dynamic behavior of an underdamped system. Underdamped motion acts as an intermediate regime between undamped systems, which oscillate indefinitely, and overdamped systems, which stabilize without oscillation. In the context of optimization, this type of motion allows particles to explore the search space dynamically, alternating between exploration and exploitation, with the ability to overshoot the optimal solution to explore new regions and avoid getting trapped in local optima. First, we review the concept of damped vibrations, an essential physical principle that describes how a system oscillates while losing energy over time, behaving in an underdamped, overdamped, or critically damped manner. This understanding forms the foundation for applying these concepts to optimization, ensuring a balanced management of exploration and exploitation. Furthermore, the classical PSO algorithm is discussed, highlighting its fundamental features and limitations, providing the necessary context to understand how the underdamped behavior improves PSO performance. The proposed metaheuristic is evaluated using benchmark functions and classic engineering problems, demonstrating its high robustness and efficiency.
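
The abstract gives no update equations, so the following is only a back-of-the-envelope sketch of the underdamped idea: a particle pulled toward the best-known position by damped second-order dynamics with damping ratio below one, which overshoots and oscillates with decaying amplitude. The step function, parameter values, and Euler integration are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch (not the paper's formulation): a particle pulled
# toward the best-known position by underdamped second-order dynamics,
#   x'' = -2*zeta*omega*x' - omega^2*(x - best),   zeta < 1,
# so it overshoots the target and oscillates with decaying amplitude.
def underdamped_step(x, v, best, zeta=0.3, omega=2.0, dt=0.05):
    a = -2.0 * zeta * omega * v - omega**2 * (x - best)  # damping + restoring force
    v = v + dt * a
    x = x + dt * v
    return x, v

x, v, best = 5.0, 0.0, 0.0
for t in range(200):
    x, v = underdamped_step(x, v, best)
print(f"final position: {x:.4f}")  # overshoots 0 repeatedly, then settles near it
```
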
Adding numbers with spiking neural circuits on neuromorphic hardware

By: Oskar von Seeler, Elena C. Offenberg, Carlo Michaelis, Jannik Luboeinski, Andrew B. Lehr, Christian Tetzlaff

Progress in neuromorphic computing requires efficient implementation of standard computational problems, like adding numbers. Here we implement one sequential and two parallel binary adders in the Lava software framework, and deploy them to the neuromorphic chip Loihi 2. We describe the time complexity, neuron and synaptic resources, as well as constraints on the bit width of the numbers that can be added with the current implementations. Further, we measure the time required for the addition operation on-chip. Importantly, we encounter trade-offs in terms of time complexity and required chip resources for the three considered adders. While the sequential adder has linear time complexity $\mathcal{O}(n)$ and requires a linearly increasing number of neurons and synapses with the number of bits $n$, the parallel adders have constant time complexity $\mathcal{O}(1)$ and also require a linearly increasing number of neurons, but nonlinearly increasing synaptic resources (scaling with $n^2$ or $n \sqrt{n}$). This trade-off between compute time and chip resources may inform decisions in application development, and the implementations we provide may serve as a building block for further progress towards efficient neuromorphic algorithms.
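
For readers unfamiliar with why a sequential adder costs $\mathcal{O}(n)$ time, the plain-Python sketch below shows the carry chain of a ripple-carry adder; it illustrates only the underlying Boolean logic, not the paper's spiking-neuron implementation in Lava.

```python
# Minimal sketch of the logic behind a sequential (ripple-carry) binary
# adder: the carry must propagate through all n bit positions one step
# at a time, which is what motivates the O(n) time complexity.
def ripple_carry_add(a_bits, b_bits):
    """Add two little-endian bit lists; returns little-endian sum bits."""
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                    # sum bit of a full adder
        carry = (a & b) | (carry & (a ^ b))  # carry-out of a full adder
        out.append(s)
    out.append(carry)
    return out

a = [1, 1, 0, 1]  # 11 in little-endian binary
b = [1, 0, 1, 1]  # 13
print(ripple_carry_add(a, b))  # [0, 0, 0, 1, 1] -> 24
```
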
Fig Tree-Wasp Symbiotic Coevolutionary Optimization Algorithm

By: Anand J Kulkarni, Isha Purnapatre, Apoorva S Shastri

Nature-inspired algorithms are becoming popular due to their simplicity and wide applicability. In the recent past, several such algorithms have been developed. They are mainly bio-inspired, swarm-based, physics-based, and socio-inspired; however, the domain based on symbiotic relations between creatures remains largely unexplored. A novel metaheuristic optimization algorithm referred to as the Fig Tree-Wasp Symbiotic Coevolutionary (FWSC) algorithm is proposed. It models the symbiotic coevolutionary relationship between fig trees and wasps. More specifically, the mating of wasps, pollination of the figs, the search for new trees to pollinate, and the wind-driven drifting of wasps are modeled in the algorithm. These phenomena help balance the two important aspects of exploring the search space efficiently and exploiting promising regions. The algorithm is successfully tested on a variety of test problems, and the results are compared with existing methods and algorithms. The Wilcoxon Signed Rank Test and Friedman Test are applied for statistical validation of the algorithm's performance. The algorithm is further applied to solve real-world engineering problems. The performance of FWSC underscores that the algorithm can be applied to a wide variety of real-world problems.
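
The abstract names the four modeled phenomena without giving update rules, so the sketch below is a purely hypothetical reading of how such operators might combine in a population loop; every operator form and constant in it is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return np.sum(x**2)

# Hypothetical operator loop echoing the four FWSC phenomena; the actual
# update equations are not given in the abstract.
def fwsc_sketch(f, dim=5, pop=20, iters=200, lb=-5.0, ub=5.0):
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        best = X[np.argmin(fit)].copy()
        for i in range(pop):
            mate = X[rng.integers(pop)]           # "wasp mating": recombination
            child = 0.5 * (X[i] + mate)
            child += 0.3 * (best - child)         # "pollination": move toward a good tree
            if rng.random() < 0.1:
                child = rng.uniform(lb, ub, dim)  # "search for new trees": restart
            child += 0.05 * rng.normal(size=dim)  # "wind drift": small perturbation
            child = np.clip(child, lb, ub)
            c_fit = f(child)
            if c_fit < fit[i]:                    # greedy replacement
                X[i], fit[i] = child, c_fit
    return X[np.argmin(fit)]

print(sphere(fwsc_sketch(sphere)))  # close to 0 for this toy objective
```
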
($\boldsymbol{\theta}_l$, $\boldsymbol{\theta}_u$)-Parametric Multi-Task
  Optimization: Joint Search in Solution and Infinite Task Spaces

By: Tingyang Wei, Jiao Liu, Abhishek Gupta, Puay Siew Tan, Yew-Soon Ong

Multi-task optimization is typically characterized by a fixed and finite set of optimization tasks. The present paper relaxes this condition by considering a non-fixed and potentially infinite set of optimization tasks defined in a parameterized, continuous, and bounded task space. We refer to this unique problem setting as parametric multi-task optimization (PMTO). Assuming the bounds of the task parameters to be ($\boldsymbol{\theta}_l$, $\boldsymbol{\theta}_u$), a novel ($\boldsymbol{\theta}_l$, $\boldsymbol{\theta}_u$)-PMTO algorithm is crafted to enable joint search over tasks and their solutions. This joint search is supported by two approximation models: (1) one mapping solutions to the objective spaces of all tasks, which provably accelerates convergence by acting as a conduit for inter-task knowledge transfer, and (2) one probabilistically mapping tasks to the solution space, which facilitates evolutionary exploration of under-explored regions of the task space. At the end of a full ($\boldsymbol{\theta}_l$, $\boldsymbol{\theta}_u$)-PMTO run, the acquired models enable rapid identification of optimized solutions for any task lying within the specified bounds. This outcome is validated on both synthetic test problems and practical case studies, with the significant real-world applicability of PMTO shown towards fast reconfiguration of robot controllers under changing task conditions. The potential of PMTO to vastly speed up the search for solutions to minimax optimization problems is also demonstrated through an example in robust engineering design.
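
As rough intuition for the joint search, the toy loop below alternates between sampling a task from the bounded task space and warm-starting its solution from the nearest previously solved task. The nearest-neighbor transfer is a crude stand-in for the paper's two approximation models, and the parametric task family is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, theta):
    """Invented parametric task family: minimize ||x - theta * 1||^2."""
    return np.sum((x - theta) ** 2)

theta_l, theta_u = -1.0, 1.0  # bounds of a 1-D task space
dim = 3
archive = []                  # (theta, x, f(x, theta)) triples

# Hypothetical PMTO-style loop: sample tasks from the bounded task space
# and warm-start each from the nearest archived task -- a crude stand-in
# for the paper's forward (solution -> objectives) and inverse
# (task -> solutions) approximation models.
for it in range(50):
    theta = rng.uniform(theta_l, theta_u)
    if archive:
        nearest = min(archive, key=lambda r: abs(r[0] - theta))
        x = nearest[1] + 0.1 * rng.normal(size=dim)  # inter-task transfer
    else:
        x = rng.uniform(-2, 2, dim)
    for _ in range(20):                              # cheap local search on this task
        cand = x + 0.1 * rng.normal(size=dim)
        if f(cand, theta) < f(x, theta):
            x = cand
    archive.append((theta, x, f(x, theta)))

# After the run, a near-optimal solution for any in-bounds task is a lookup away.
query = 0.37
print(min(archive, key=lambda r: abs(r[0] - query))[1])  # approx. [0.37, 0.37, 0.37]
```
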
A Grid Cell-Inspired Structured Vector Algebra for Cognitive Maps

By: Sven Krausse, Emre Neftci, Friedrich T. Sommer, Alpha Renner

The entorhinal-hippocampal formation is the mammalian brain's navigation system, encoding both physical and abstract spaces via grid cells. This system is well-studied in neuroscience, and its efficiency and versatility make it attractive for applications in robotics and machine learning. While continuous attractor networks (CANs) successfully model entorhinal grid cells for encoding physical space, integrating both continuous spatial and abstract spatial computations into a unified framework remains challenging. Here, we attempt to bridge this gap by proposing a mechanistic model for versatile information processing in the entorhinal-hippocampal formation inspired by CANs and Vector Symbolic Architectures (VSAs), a neuro-symbolic computing framework. The novel grid-cell VSA (GC-VSA) model employs a spatially structured encoding scheme with 3D neuronal modules mimicking the discrete scales and orientations of grid cell modules, reproducing their characteristic hexagonal receptive fields. In experiments, the model demonstrates versatility in spatial and abstract tasks: (1) accurate path integration for tracking locations, (2) spatio-temporal representation for querying object locations and temporal relations, and (3) symbolic reasoning using family trees as a structured test case for hierarchical relationships.
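
A common VSA primitive behind grid-like spatial codes is fractional power encoding, where binding (elementwise multiplication of unit phasors) adds phases and hence accumulates displacement, which is exactly path integration. The sketch below shows that primitive only; the GC-VSA model's structured 3D modules and hexagonal receptive fields are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 1024                             # vector dimensionality

# Fractional power encoding: a fixed random phasor "base" vector raised
# elementwise to the power x encodes position x. Binding two encodings
# multiplies phasors, so phases (and thus positions) add.
phases = rng.uniform(-np.pi, np.pi, D)

def encode(x):
    return np.exp(1j * phases * x)   # base ** x, elementwise

def similarity(a, b):
    return np.real(np.vdot(a, b)) / D  # ~1 for same position, ~0 otherwise

pos = encode(0.0)
for step in [0.5, 0.5, -0.2]:        # path integration = repeated binding
    pos = pos * encode(step)

print(similarity(pos, encode(0.8)))  # close to 1.0: we are at x = 0.8
print(similarity(pos, encode(3.0)))  # near 0.0: clearly not at x = 3.0
```
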
Speeding up Local Search for the Indicator-based Subset Selection
  Problem by a Candidate List Strategy

By: Keisuke Korogi, Ryoji Tanabe

In evolutionary multi-objective optimization, the indicator-based subset selection problem involves finding a subset of points that maximizes a given quality indicator. Local search is an effective approach for obtaining a high-quality subset in this problem. However, local search incurs a high computational cost, especially as the size of the point set and the number of objectives increase. To address this issue, this paper proposes a candidate list strategy for local search in the indicator-based subset selection problem. In the proposed strategy, each point in a given point set has a candidate list. During the search, each point is only eligible to swap with unselected points in its associated candidate list. This restriction drastically reduces the number of swaps at each iteration of local search. We consider two types of candidate lists: nearest neighbor and random neighbor lists. This paper investigates the effectiveness of the proposed candidate list strategy on various Pareto fronts. The results show that the proposed strategy with the nearest neighbor list can significantly speed up local search on continuous Pareto fronts without significantly compromising the subset quality. The results also show that the sequential use of the two lists can address the discontinuity of Pareto fronts.
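
A minimal sketch of the candidate-list restriction follows, with the quality indicator replaced by a toy spread measure (the paper uses indicators such as hypervolume) and the list size k chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(3)
points = rng.random((200, 2))                 # a 2-objective point set
k, subset_size = 10, 20

# Nearest-neighbor candidate lists: each point may only be swapped with
# unselected points among its k nearest neighbors, shrinking the swap
# neighborhood examined at each local-search step.
dists = np.linalg.norm(points[:, None] - points[None, :], axis=2)
cand = np.argsort(dists, axis=1)[:, 1:k + 1]  # skip self (column 0)

def quality(idx):
    """Toy stand-in for a quality indicator: total pairwise spread."""
    sub = points[list(idx)]
    return np.sum(np.linalg.norm(sub[:, None] - sub[None, :], axis=2))

def try_improve(selected):
    for p in selected:
        for q in cand[p]:
            q = int(q)
            if q in selected:
                continue                      # swaps restricted to the candidate list
            trial = (selected - {p}) | {q}
            if quality(trial) > quality(selected):
                return trial                  # first-improvement move
    return None

selected = set(int(i) for i in rng.choice(len(points), subset_size, replace=False))
while (nxt := try_improve(selected)) is not None:
    selected = nxt
print(sorted(selected))
```
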
Goat Optimization Algorithm: A Novel Bio-Inspired Metaheuristic for
  Global Optimization

By: Hamed Nozari, Hoessein Abdi, Agnieszka Szmelter-Jarosz

This paper presents the Goat Optimization Algorithm (GOA), a novel bio-inspired metaheuristic optimization technique inspired by goats' adaptive foraging, strategic movement, and parasite avoidance behaviors. GOA is designed to balance exploration and exploitation effectively by incorporating three key mechanisms: adaptive foraging for global search, movement toward the best solution for local refinement, and a jump strategy to escape local optima. A solution filtering mechanism is introduced to enhance robustness and maintain population diversity. The algorithm's performance is evaluated on standard unimodal and multimodal benchmark functions, demonstrating significant improvements over existing metaheuristics, including Particle Swarm Optimization (PSO), Grey Wolf Optimizer (GWO), Genetic Algorithm (GA), Whale Optimization Algorithm (WOA), and Artificial Bee Colony (ABC). Comparative analysis highlights GOA's superior convergence rate, enhanced global search capability, and higher solution accuracy. A Wilcoxon rank-sum test confirms the statistical significance of GOA's performance improvements. Despite its efficiency, computational complexity and parameter sensitivity remain areas for further optimization. Future research will focus on adaptive parameter tuning, hybridization with other metaheuristics, and real-world applications in supply chain management, bioinformatics, and energy optimization. The findings suggest that GOA is a promising advancement in bio-inspired optimization techniques.
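
Since the abstract describes the three mechanisms only qualitatively, here is a hypothetical sketch of how they might fit together in one loop; the branching probabilities and step forms are assumptions, not GOA's actual equations.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    return np.sum(x**2)

# Hypothetical sketch of GOA's three named mechanisms plus solution
# filtering; the abstract does not give the update equations.
def goa_sketch(f, dim=10, pop=30, iters=300, lb=-10.0, ub=10.0):
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best = X[np.argmin(fit)].copy()
        for i in range(pop):
            r = rng.random()
            if r < 0.4:    # adaptive foraging: random step (global search)
                cand = X[i] + rng.uniform(-1, 1, dim)
            elif r < 0.9:  # movement toward the best solution (local refinement)
                cand = X[i] + rng.random() * (best - X[i])
            else:          # jump strategy: relocate to escape local optima
                cand = rng.uniform(lb, ub, dim)
            cand = np.clip(cand, lb, ub)
            c_fit = f(cand)
            if c_fit < fit[i]:  # solution filtering: keep only improvements
                X[i], fit[i] = cand, c_fit
    return X[np.argmin(fit)], fit.min()

x_best, f_best = goa_sketch(sphere)
print(f"best value: {f_best:.3e}")
```
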
Waves and symbols in neuromorphic hardware: from analog signal
  processing to digital computing on the same computational substrate

By: Dmitrii Zendrikov, Alessio Franci, Giacomo Indiveri

Neural systems use the same underlying computational substrate to carry out analog filtering and signal processing operations, as well as discrete symbol manipulation and digital computation. Inspired by the computational principles of canonical cortical microcircuits, we propose a framework for using recurrent spiking neural networks to seamlessly and robustly switch between analog signal processing and categorical and discrete computation. We provide theoretical analysis and practical neural network design tools to formally determine the conditions for inducing this switch. We demonstrate the robustness of this framework experimentally with hardware soft Winner-Take-All and mixed-feedback recurrent spiking neural networks, implemented by appropriately configuring the analog neuron and synapse circuits of a mixed-signal neuromorphic processor chip.
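
The analog-to-categorical switch can be illustrated with a tiny rate-based Winner-Take-All model: weak recurrent excitation leaves the network in a graded (analog) regime, while strong excitation makes it settle on a single winner. This rate model is only a conceptual stand-in for the paper's spiking, mixed-signal implementation, and the parameter values are illustrative.

```python
import numpy as np

# Toy rate-based winner-take-all: self-excitation w_exc competes with
# global inhibition w_inh. Depending on w_exc, the same circuit behaves
# like an analog filter (graded outputs) or a categorical selector.
def wta(inputs, w_exc, w_inh=0.9, steps=500, dt=0.05):
    x = np.zeros_like(inputs)
    for _ in range(steps):
        drive = inputs + w_exc * x - w_inh * x.sum()  # recurrence + inhibition
        x = x + dt * (-x + np.maximum(drive, 0.0))    # ReLU rate dynamics
    return x

I = np.array([1.0, 1.1, 0.9])
print(wta(I, w_exc=0.2))  # analog regime: all units active, graded by input
print(wta(I, w_exc=1.2))  # categorical regime: only the largest input survives
```
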
Modified FOX Optimizer for Solving Optimization Problems

By: Dler O. Hasan, Hardi M. Mohammed, Zrar Khalid Abdul

The FOX optimizer, inspired by red fox hunting behavior, is a powerful algorithm for solving real-world and engineering problems. However, despite balancing exploration and exploitation, it can prematurely converge to local optima, as agent positions are updated solely based on the current best-known position, causing all agents to converge on one location. This study proposes the modified FOX optimizer (mFOX) to enhance exploration and balance exploration and exploitation in three steps. First, an opposition-based learning (OBL) strategy is used to improve the initial population. Second, control parameters are refined to achieve a better balance between exploration and exploitation. Third, a new update equation is introduced, allowing agents to adjust their positions relative to one another rather than relying solely on the best-known position. This approach improves exploration efficiency without adding complexity. The mFOX algorithm's performance is evaluated against 12 well-known algorithms on 23 classical benchmark functions, 10 CEC2019 functions, and 12 CEC2022 functions. It outperforms competitors in 74% of the classical benchmarks, 60% of the CEC2019 benchmarks, and 58% of the CEC2022 benchmarks. Additionally, mFOX effectively addresses four engineering problems. These results demonstrate mFOX's strong competitiveness in solving complex optimization tasks, including unimodal, constrained, and high-dimensional problems.
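
Of the three modifications, the opposition-based initialization is concrete enough to sketch: each random agent x is paired with its opposite lb + ub - x, and the fitter half of the combined pool seeds the population. The fitness function and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Opposition-based learning (OBL) initialization, the first of the three
# mFOX modifications: for each random agent, also evaluate its "opposite"
# (lb + ub - x) and keep the fitter half of the combined pool.
def obl_init(f, pop, dim, lb, ub):
    X = rng.uniform(lb, ub, (pop, dim))
    X_opp = lb + ub - X                    # opposite population
    both = np.vstack([X, X_opp])
    fit = np.apply_along_axis(f, 1, both)
    return both[np.argsort(fit)[:pop]]     # best half survives

def sphere(x):
    return np.sum(x**2)

X0 = obl_init(sphere, pop=20, dim=5, lb=-10.0, ub=10.0)
print(np.apply_along_axis(sphere, 1, X0).mean())  # fitter than plain random init on average
```
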
Towards 3D Acceleration for low-power Mixture-of-Experts and Multi-Head
  Attention Spiking Transformers

By: Boxun Xu, Junyoung Hwang, Pruek Vanna-iampikul, Yuxuan Yin, Sung Kyu Lim, Peng Li

Spiking Neural Networks (SNNs) provide a brain-inspired and event-driven mechanism that is believed to be critical to unlocking energy-efficient deep learning. The mixture-of-experts approach mirrors the parallel distributed processing of nervous systems, introducing conditional computation policies and expanding model capacity without scaling up the number of computational operations. Additionally, spiking mixture-of-experts self-attention mechanisms enhance representation capacity, effectively capturing diverse patterns of entities and dependencies between visual or linguistic tokens. However, there is currently a lack of hardware support for the highly parallel distributed processing needed by spiking transformers, which embody brain-inspired computation. This paper introduces the first 3D hardware architecture and design methodology for Mixture-of-Experts and Multi-Head Attention spiking transformers. By leveraging 3D integration with memory-on-logic and logic-on-logic stacking, we explore such brain-inspired accelerators with spatially stackable circuitry, demonstrating significant optimization of energy efficiency and latency compared to conventional 2D CMOS integration.
Trimming Down Large Spiking Vision Transformers via Heterogeneous
  Quantization Search

By: Boxun Xu, Yufei Song, Peng Li

Spiking Neural Networks (SNNs) are amenable to deployment on edge devices and neuromorphic hardware due to their lower energy dissipation. Recently, SNN-based transformers have garnered significant interest, incorporating attention mechanisms akin to their counterparts in Artificial Neural Networks (ANNs) while demonstrating excellent performance. However, deploying large spiking transformer models on resource-constrained edge devices such as mobile phones still poses significant challenges resulting from the high computational demands of large, uncompressed, high-precision models. In this work, we introduce a novel heterogeneous quantization method for compressing spiking transformers through layer-wise quantization. Our approach optimizes the quantization of each layer using one of two distinct quantization schemes, i.e., uniform or power-of-two quantization, with mixed bit resolutions. Our heterogeneous quantization demonstrates the feasibility of maintaining high performance for spiking transformers while utilizing an average effective resolution of 3.14-3.67 bits with less than a 1% accuracy drop on the DVS-Gesture and CIFAR10-DVS datasets. It attains a model compression rate of 8.71x-10.19x for standard floating-point spiking transformers. Moreover, the proposed approach achieves significant energy reductions of 5.69x, 8.72x, and 10.2x while maintaining high accuracy levels of 85.3%, 97.57%, and 80.4% on the N-Caltech101, DVS-Gesture, and CIFAR10-DVS datasets, respectively.
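
The two per-layer schemes the search chooses between can be sketched as follows; the bit widths, rounding, and clipping details here are illustrative assumptions rather than the paper's exact quantizers.

```python
import numpy as np

# Sketch of the two quantization schemes assigned layer-wise by the
# heterogeneous search (details are illustrative).
def quant_uniform(w, bits):
    """Uniform quantization: evenly spaced levels over the weight range."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def quant_pow2(w, bits):
    """Power-of-two quantization: levels at +/- 2^k, cheap shifts in hardware."""
    sign = np.sign(w)
    mag = np.maximum(np.abs(w), 1e-12)                       # avoid log2(0)
    exp = np.clip(np.round(np.log2(mag)), -(2 ** (bits - 1)), 0)
    return sign * 2.0 ** exp

w = np.random.default_rng(6).normal(0, 0.2, 8)
print(quant_uniform(w, 3))  # evenly spaced reconstruction levels
print(quant_pow2(w, 3))     # levels snap to powers of two
```
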
JPC: Flexible Inference for Predictive Coding Networks in JAX

By: Francesco Innocenti, Paul Kinghorn, Will Yun-Farmbrough, Miguel De Llanza Varona, Ryan Singh, Christopher L. Buckley

We introduce JPC, a JAX library for training neural networks with Predictive Coding (PC). JPC provides a simple, fast, and flexible interface to train a variety of PC networks (PCNs), including discriminative, generative, and hybrid models. Unlike existing libraries, JPC leverages ordinary differential equation solvers to integrate the gradient-flow inference dynamics of PCNs. We find that a second-order solver achieves significantly faster runtimes than standard Euler integration, with comparable performance on a range of tasks and network depths. JPC also provides theoretical tools that can be used to study PCNs. We hope that JPC will facilitate future research on PC. The code is available at https://github.com/thebuckleylab/jpc.
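
The following is not JPC's actual API, but a self-contained JAX sketch of the idea it implements: treating PC inference as a gradient flow on the energy and integrating it with either Euler or a second-order (Heun) step. The network, energy form, and step sizes are illustrative.

```python
import jax
import jax.numpy as jnp

# Predictive coding energy for a small chain: sum of squared layer-wise
# prediction errors, with clamped input x and target y.
def energy(z, Ws, x, y):
    zs = [x] + list(z) + [y]
    F = 0.0
    for W, pre, post in zip(Ws, zs[:-1], zs[1:]):
        err = post - jnp.tanh(W @ pre)  # layer-wise prediction error
        F += 0.5 * jnp.sum(err ** 2)
    return F

grad_z = jax.grad(energy, argnums=0)    # gradient w.r.t. hidden activities only

def euler_step(z, Ws, x, y, dt=0.1):
    g = grad_z(z, Ws, x, y)
    return [zi - dt * gi for zi, gi in zip(z, g)]

def heun_step(z, Ws, x, y, dt=0.1):     # second-order: predict, then average slopes
    g1 = grad_z(z, Ws, x, y)
    z_pred = [zi - dt * gi for zi, gi in zip(z, g1)]
    g2 = grad_z(z_pred, Ws, x, y)
    return [zi - 0.5 * dt * (a + b) for zi, a, b in zip(z, g1, g2)]

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
Ws = [jax.random.normal(k1, (4, 3)), jax.random.normal(k2, (2, 4))]
x, y = jnp.ones(3), jnp.zeros(2)
z = [jax.random.normal(k3, (4,))]       # one hidden layer's activities
for _ in range(100):
    z = heun_step(z, Ws, x, y)
print(energy(z, Ws, x, y))              # energy decreases along the flow
```
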
Un-evaluated Solutions May Be Valuable in Expensive Optimization

By: Hao Hao, Xiaoqun Zhang, Aimin Zhou

Expensive optimization problems (EOPs) are prevalent in real-world applications, where the evaluation of a single solution requires a significant amount of resources. In our study of surrogate-assisted evolutionary algorithms (SAEAs) in EOPs, we discovered an intriguing phenomenon. Because only a limited number of solutions are evaluated in each iteration, relying solely on these evaluated solutions for evolution can lead to reduced disparity in successive populations. This, in turn, hampers the reproduction operators' ability to generate superior solutions, thereby reducing the algorithm's convergence speed. To address this issue, we propose a strategic approach that incorporates high-quality, un-evaluated solutions predicted by surrogate models during the selection phase. This approach aims to improve the distribution of evaluated solutions, thereby generating a superior next generation of solutions. This work details specific implementations of this concept across various reproduction operators and validates its effectiveness using multiple surrogate models. Experimental results demonstrate that the proposed strategy significantly enhances the performance of surrogate-assisted evolutionary algorithms. Compared to mainstream SAEAs and Bayesian optimization algorithms, our approach incorporating the un-evaluated solution strategy shows a marked improvement.
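
The core selection idea can be sketched as follows: per generation, only a few surrogate-ranked offspring receive true evaluations, yet offspring that the surrogate ranks highly still enter the next population un-evaluated. The k-nearest-neighbor surrogate and all sizes are illustrative choices, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(7)

def expensive_f(x):
    return np.sum(x**2)                    # stand-in for a costly simulation

def knn_predict(X_tr, y_tr, x, k=5):
    """Illustrative surrogate: k-nearest-neighbor regression."""
    idx = np.argsort(np.linalg.norm(X_tr - x, axis=1))[:k]
    return y_tr[idx].mean()

dim, pop, budget = 5, 30, 3                # only 3 true evaluations per generation
X = rng.uniform(-5, 5, (pop, dim))
archive_X = X.copy()
archive_y = np.apply_along_axis(expensive_f, 1, X)

for gen in range(30):
    off = X[rng.permutation(pop)] + 0.5 * rng.normal(size=(pop, dim))  # reproduction
    pred = np.array([knn_predict(archive_X, archive_y, o) for o in off])
    rank = np.argsort(pred)
    top = off[rank[:budget]]               # only the surrogate's favorites are evaluated
    archive_X = np.vstack([archive_X, top])
    archive_y = np.concatenate([archive_y, np.apply_along_axis(expensive_f, 1, top)])
    # key step: surrogate-ranked offspring enter the next population even
    # though most of them were never truly evaluated
    X = off[rank]

print(archive_y.min())
```
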
Can Large Language Models Be Trusted as Black-Box Evolutionary
  Optimizers for Combinatorial Problems?

By: Jie Zhao, Tao Wen, Kang Hao Cheong

Evolutionary computation excels in complex optimization but demands deep domain knowledge, restricting its accessibility. Large Language Models (LLMs) offer a game-changing solution with their extensive knowledge and could democratize the optimization paradigm. Although LLMs possess significant capabilities, they may not be universally effective, particularly since evolutionary optimization encompasses multiple stages. It is therefore imperative to evaluate the suitability of LLMs as evolutionary optimizers (EVO). Thus, we establish a series of rigorous standards to thoroughly examine the fidelity of LLM-based EVO output at different stages of evolutionary optimization, and then introduce a robust error-correction mechanism to mitigate output uncertainty. Furthermore, we explore a cost-efficient method that operates directly on entire populations, with excellent effectiveness in contrast to individual-level optimization. Through extensive experiments, we rigorously validate the performance of LLMs as operators targeted at combinatorial problems. Our findings provide critical insights and valuable observations, advancing the understanding and application of LLM-based optimization.
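
As one concrete example of why an error-correction mechanism is needed: an LLM asked to act as a variation operator on a permutation-coded problem (e.g., a TSP tour) may return duplicated or missing elements, so the raw output must be validated and repaired before it enters the population. The repair step below is a hypothetical illustration, not the paper's mechanism.

```python
# Hypothetical repair step for an LLM-proposed TSP tour: drop duplicate
# or out-of-range city ids, then re-insert any omitted cities, so the
# result is always a valid permutation of 0..n_cities-1.
def repair_tour(raw_tour, n_cities):
    seen, tour = set(), []
    for city in raw_tour:
        if isinstance(city, int) and 0 <= city < n_cities and city not in seen:
            tour.append(city)
            seen.add(city)
    missing = [c for c in range(n_cities) if c not in seen]
    return tour + missing

# e.g. an LLM "optimizer" returned city 2 twice and forgot city 4:
print(repair_tour([0, 2, 2, 7, 1, 3, 6, 5], n_cities=8))  # [0, 2, 7, 1, 3, 6, 5, 4]
```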