Information Theoretic Learning Tsallis and Kapur Entropy and its Applications to Adaptive System Training
Abstract
An information-theoretic measure provides the mathematical background for various concepts in decision-making. Entropy is a fundamental concept for quantifying the uncertainty associated with a random variable. Adaptive learning is a data-driven technique that builds on the experience gained during data analysis. In modern technology, robust techniques are needed to simplify the topologies of adaptive systems. Some gaps have been identified in the existing research, and useful findings have been established with generalized information-theoretic measures. When exact computation is expensive, it can be replaced with a cheaper estimate; this work therefore explores the construction of kernel-based density estimators and their application to adaptive system training. In kernel density estimation (KDE), the probability density function (PDF) of a continuous random variable is estimated with a kernel function, giving insight into the distribution of the data points in a dataset. The main focus of the research is to propose kernel density estimators for Tsallis entropy of order α and for Kapur entropy of order α and type β using the Parzen-Rosenblatt window. The Tsallis entropy estimator and the Kapur entropy estimator of order α and type β optimize feature parameters, and the results are presented in Theorems 2.1-2.4 and 3.1-3.4 and Properties 2.1-2.4 and 3.1-3.4. These results have applications in statistical noise rejection in adaptive systems. The density estimators also serve as minimum-entropy criteria for blind deconvolution problems. Minimum entropy deconvolution for Tsallis entropy and for Kapur entropy of order α and type β is proved in Theorems 4.1-4.2, followed by Corollaries 4.1-4.2. The derivation of the stochastic gradient (SG) for the Tsallis entropy and Kapur entropy estimators is discussed in Chapter 5. From the results, it is observed that the SG for Tsallis entropy is (IP + 1) times that of Shannon entropy or Rényi's entropy, whereas the SG for Kapur entropy of order α and type β is the same as that of Shannon entropy or Rényi's entropy. These results are helpful in solving supervised adaptation problems using the minimum error entropy (MEE) criterion.
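To make the abstract's central idea concrete, the sketch below shows a plug-in (resubstitution) estimate of Tsallis entropy of order q and of Kapur entropy of order α and type β, with the PDF replaced by a Parzen-Rosenblatt window estimate. This is a minimal illustration under assumptions of my own (a Gaussian window, hand-picked bandwidth `h`, and the resubstitution form of the plug-in estimator); it is not the thesis's derivation, which develops these estimators and their properties formally.

```python
import numpy as np

def parzen_kde(samples, points, h):
    """Parzen-Rosenblatt window estimate of the PDF at `points`.

    Uses a Gaussian kernel of bandwidth h (an illustrative choice)."""
    diffs = (points[:, None] - samples[None, :]) / h
    kernel = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    return kernel.mean(axis=1) / h

def tsallis_entropy_estimate(samples, q, h):
    """Plug-in Tsallis entropy of order q:
    S_q = (1 - E[f^{q-1}]) / (q - 1), with f replaced by the KDE
    evaluated back at the sample points (resubstitution)."""
    f_hat = parzen_kde(samples, samples, h)
    return (1.0 - np.mean(f_hat ** (q - 1.0))) / (q - 1.0)

def kapur_entropy_estimate(samples, alpha, beta, h):
    """Plug-in continuous analogue of Kapur entropy of order alpha
    and type beta: H = ln(E[f^{alpha-1}] / E[f^{beta-1}]) / (beta - alpha)."""
    f_hat = parzen_kde(samples, samples, h)
    num = np.mean(f_hat ** (alpha - 1.0))
    den = np.mean(f_hat ** (beta - 1.0))
    return np.log(num / den) / (beta - alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=2000)  # samples from a standard normal
    print("Tsallis S_2:", tsallis_entropy_estimate(x, q=2.0, h=0.3))
    print("Kapur H_{1.5,2}:", kapur_entropy_estimate(x, alpha=1.5, beta=2.0, h=0.3))
```

For a standard normal, the true Tsallis entropy of order 2 is 1 - 1/(2√π) ≈ 0.718, so the printed estimate should land nearby; the residual gap reflects the smoothing bias of the window and the self-term in resubstitution, which is the kind of finite-sample behavior the thesis's theorems characterize.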