Dynamic Neural Network Optimization: A single agent neuroevolution algorithm based on hill climbing optimization for Neural Architecture Search
DOI:
DOI:
https://doi.org/10.61467/2007.1558.2025.v16i1.549

Keywords:
Artificial neural networks, hill climbing, metaheuristic, function approximation

Abstract
In the field of deep learning, identifying optimal neural architectures requires not only deep expertise but also a substantial investment of time and effort in evaluating the outcomes generated by each proposed model. In this study, we introduce a single-agent neuroevolution algorithm for Neural Architecture Search based on the Hill Climbing algorithm, named Dynamic Neural Network Optimization (DyNNO). This approach focuses on evaluating the performance of neural networks optimized for function approximation. Additionally, we explored minimizing the number of neurons within the neural network structures. The results demonstrated the feasibility of using this algorithm to automate the neural architecture search process. Furthermore, the reduction in the number of parameters improved the generalization capability of the networks. The findings also suggest that mutating activation functions may be a factor worth exploring to achieve a more effective reduction in error rates.
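The core idea described in the abstract — a single agent that repeatedly mutates a network architecture and keeps the change only if it improves fitness, with a penalty favoring fewer neurons — can be illustrated with a minimal hill-climbing sketch. This is not the authors' DyNNO implementation: the `evaluate` function below is a hypothetical stand-in (in the actual algorithm, fitness would come from training the candidate network and measuring its function-approximation error), and the mutation operators and penalty weight are assumptions for illustration only.

```python
import random

def evaluate(layers):
    # Hypothetical fitness proxy: distance from an assumed "ideal"
    # architecture plus a small penalty on total neuron count
    # (mirroring DyNNO's goal of minimizing neurons). A real
    # implementation would train the network and measure
    # validation error on the target function instead.
    ideal = [12, 6]
    error = sum(abs(a - b) for a, b in zip(layers, ideal))
    error += abs(len(layers) - len(ideal)) * 10
    penalty = 0.01 * sum(layers)  # fewer neurons -> lower fitness value
    return error + penalty

def mutate(layers):
    # Single random structural mutation: grow or shrink one layer,
    # or add/remove a whole layer.
    new = list(layers)
    op = random.choice(["grow", "shrink", "add", "remove"])
    if op == "grow":
        new[random.randrange(len(new))] += 1
    elif op == "shrink":
        i = random.randrange(len(new))
        if new[i] > 1:
            new[i] -= 1
    elif op == "add":
        new.append(random.randint(1, 8))
    elif op == "remove" and len(new) > 1:
        new.pop(random.randrange(len(new)))
    return new

def hill_climb(start, steps=2000, seed=0):
    # Single-agent hill climbing: accept a mutation only when it
    # strictly improves fitness (lower is better).
    random.seed(seed)
    best, best_fit = list(start), evaluate(start)
    for _ in range(steps):
        cand = mutate(best)
        fit = evaluate(cand)
        if fit < best_fit:
            best, best_fit = cand, fit
    return best, best_fit

best, best_fit = hill_climb([4])
print(best, best_fit)
```

The accept-only-improvements rule is what makes this plain hill climbing rather than a population-based neuroevolution method: a single candidate walks the architecture space, so the algorithm is cheap per step but can stall in local optima.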
License
Copyright (c) 2025 International Journal of Combinatorial Optimization Problems and Informatics

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.