Dynamic Neural Network Optimization: A single agent neuroevolution algorithm based on hill climbing optimization for Neural Architecture Search

Authors

  • Yoqsan Angeles
  • Valeria Karina Legaria-Santiago
  • Hiram Calvo
  • Álvaro Anzueto

DOI:

https://doi.org/10.61467/2007.1558.2025.v16i1.549

Keywords:

Artificial neural networks, hill climbing, metaheuristic, neural architecture search, function approximation

Abstract

In the field of deep learning, identifying optimal neural architectures requires not only profound expertise but also a substantial investment of time and effort in evaluating the outcomes generated by each proposed model. In this study, we introduce a single agent neuroevolution algorithm based on the hill climbing algorithm for Neural Architecture Search, named Dynamic Neural Network Optimization (DyNNO). This approach focuses on evaluating the performance of neural networks optimized for function approximation. Additionally, we explored minimizing the number of neurons within the neural network structures. The results demonstrated the feasibility of using this algorithm to automate the neural architecture search process. Furthermore, the reduction in the number of parameters improved the generalization capability of the networks. The findings also suggest that mutation in activation functions may be a factor worth exploring to achieve a more effective reduction in error rates.
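The single agent hill-climbing loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the architecture encoding (hidden width plus activation name), the mutation operators, the neuron-count penalty weight, and the cheap fitness evaluation (random hidden weights with a least-squares output layer) are all assumptions chosen to keep the example short and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy function-approximation task: y = sin(3x) on [-1, 1]
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X).ravel()

ACTIVATIONS = {"tanh": np.tanh, "relu": lambda z: np.maximum(z, 0.0)}

def fitness(arch):
    """Cheap proxy evaluation of (hidden_size, activation_name):
    random hidden weights, output layer fitted by least squares.
    A small penalty on neuron count mirrors the size-minimization goal."""
    n_hidden, act_name = arch
    W = rng.normal(size=(1, n_hidden))
    b = rng.normal(size=n_hidden)
    H = ACTIVATIONS[act_name](X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    mse = np.mean((H @ beta - y) ** 2)
    return mse + 1e-4 * n_hidden

def mutate(arch):
    """Single agent mutation: perturb the width or swap the activation."""
    n_hidden, act_name = arch
    if rng.random() < 0.5:
        n_hidden = max(1, n_hidden + int(rng.choice([-2, -1, 1, 2])))
    else:
        act_name = str(rng.choice(list(ACTIVATIONS)))
    return (n_hidden, act_name)

def hill_climb(steps=200):
    """Greedy hill climbing: keep a mutated architecture only if it improves."""
    current = (4, "tanh")            # small initial architecture
    best_f = fitness(current)
    for _ in range(steps):
        candidate = mutate(current)
        f = fitness(candidate)
        if f < best_f:
            current, best_f = candidate, f
    return current, best_f

arch, err = hill_climb()
print("best architecture:", arch, "penalized error:", err)
```

Because the agent only ever accepts improving mutations, the penalized error is monotonically non-increasing; the neuron-count penalty is what lets smaller networks win ties against larger ones.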

Published

2025-03-18

How to Cite

Angeles, Y., Legaria-Santiago, V. K., Calvo, H., & Anzueto, Álvaro. (2025). Dynamic Neural Network Optimization: A single agent neuroevolution algorithm based on hill climbing optimization for Neural Architecture Search. International Journal of Combinatorial Optimization Problems and Informatics, 16(1), 151–163. https://doi.org/10.61467/2007.1558.2025.v16i1.549

Issue

Section

Articles