TY - CONF
T1 - Learning opposites using neural networks
AU - Kalra, Shivam
AU - Sriram, Aditya
AU - Rahnamayan, Shahryar
AU - Tizhoosh, H. R.
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/1/1
Y1 - 2016/1/1
N2 - Many research works have successfully extended algorithms such as evolutionary algorithms, reinforcement agents and neural networks using 'opposition-based learning' (OBL). Two types of opposites have been defined in the literature, namely type-I and type-II. The former are linear in nature and applicable in the variable space, hence easy to calculate. Type-II opposites, on the other hand, capture 'oppositeness' in the output space. In fact, type-I opposites can be considered a special case of type-II opposites in which inputs and outputs have a linear relationship. However, in many real-world problems, inputs and outputs exhibit a nonlinear relationship. Therefore, type-II opposites are expected to better capture the sense of 'opposition' with respect to the input-output relation. In the absence of any knowledge about the problem at hand, there is no intuitive way to calculate type-II opposites. In this paper, we introduce an approach to learn type-II opposites from given inputs and their outputs using artificial neural networks (ANNs). We first perform opposition mining on the sample data and then use the mined data to learn the relationship between an input x and its opposite x̆. We validate our algorithm on various benchmark functions and compare it against a recently introduced evolving fuzzy inference approach. The results show the better performance of the neural approach in learning opposites. This creates new possibilities for integrating oppositional schemes within existing algorithms, promising a potential increase in convergence speed and/or accuracy.
AB - Many research works have successfully extended algorithms such as evolutionary algorithms, reinforcement agents and neural networks using 'opposition-based learning' (OBL). Two types of opposites have been defined in the literature, namely type-I and type-II. The former are linear in nature and applicable in the variable space, hence easy to calculate. Type-II opposites, on the other hand, capture 'oppositeness' in the output space. In fact, type-I opposites can be considered a special case of type-II opposites in which inputs and outputs have a linear relationship. However, in many real-world problems, inputs and outputs exhibit a nonlinear relationship. Therefore, type-II opposites are expected to better capture the sense of 'opposition' with respect to the input-output relation. In the absence of any knowledge about the problem at hand, there is no intuitive way to calculate type-II opposites. In this paper, we introduce an approach to learn type-II opposites from given inputs and their outputs using artificial neural networks (ANNs). We first perform opposition mining on the sample data and then use the mined data to learn the relationship between an input x and its opposite x̆. We validate our algorithm on various benchmark functions and compare it against a recently introduced evolving fuzzy inference approach. The results show the better performance of the neural approach in learning opposites. This creates new possibilities for integrating oppositional schemes within existing algorithms, promising a potential increase in convergence speed and/or accuracy.
UR - http://www.scopus.com/inward/record.url?scp=85019106279&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85019106279&partnerID=8YFLogxK
U2 - 10.1109/ICPR.2016.7899802
DO - 10.1109/ICPR.2016.7899802
M3 - Conference contribution
AN - SCOPUS:85019106279
T3 - Proceedings - International Conference on Pattern Recognition
SP - 1213
EP - 1218
BT - 2016 23rd International Conference on Pattern Recognition, ICPR 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 23rd International Conference on Pattern Recognition, ICPR 2016
Y2 - 4 December 2016 through 8 December 2016
ER -