Activation functions for feed forward artificial neural networks


Abstract

Functions of a particular class, when used at the hidden-layer nodes of a feedforward artificial neural network (FFANN), give these networks the universal approximation property. The theoretical results do not recommend any particular activation function, which fuels the search for optimal activation functions. In this work, two new activation functions are proposed. On the basis of experimental results on function approximation tasks and real-life regression problems, it is demonstrated that the proposed activation functions achieve lower training and generalization errors. That is, the proposed activation functions perform better than the four commonly used activation functions: the logistic (log-sigmoid) function, the hyperbolic tangent function, the arc-tangent function, and the softsign (Elliott's) function. Results reported in the literature indicate that adapting activation functions during training can reduce training time, training error, and generalization error. A set of parametrized activation functions, each with a single parameter, was identified. The use of the proposed adaptive (auto-tuning) activation functions in FFANNs is compared with other adaptive activation function approaches, and the proposed mechanism is found to be better in terms of shorter training time and lower training and generalization errors.
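The four baseline activation functions named in the abstract can be sketched as below, together with a hypothetical one-parameter adaptive sigmoid whose slope is tuned during training. The adaptive form is an illustrative assumption only; the abstract does not state the actual parametrized family proposed in the thesis.

```python
import numpy as np

# The four commonly used activation functions compared in the abstract.
def logistic(x):
    """Log-sigmoid, range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def hyperbolic_tangent(x):
    """Hyperbolic tangent, range (-1, 1)."""
    return np.tanh(x)

def arc_tangent(x):
    """Arc-tangent, range (-pi/2, pi/2)."""
    return np.arctan(x)

def softsign(x):
    """Softsign / Elliott's function, range (-1, 1)."""
    return x / (1.0 + np.abs(x))

# Hypothetical one-parameter adaptive activation: a sigmoid whose slope `a`
# would be trained alongside the network weights. Illustrative only; the
# thesis's actual one-parameter family is not specified in the abstract.
def adaptive_sigmoid(x, a=1.0):
    return 1.0 / (1.0 + np.exp(-a * x))
```

All five functions are monotone and bounded, which is the property class that underlies the universal approximation results mentioned above.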

