
Interpretability for Neural Networks from the Perspective of Probability Density


Indexed by: Conference Paper

Date of Publication: 2019-01-01

Included in: EI, CPCI-S

Page Number: 1502-1507

Key Words: neural networks; interpretability; probability density; Gaussian distribution

Abstract: Currently, most work on the interpretation of neural networks seeks to visually explain the features learned by hidden layers. This paper explores the relationship between the input units and the output units of a neural network from the perspective of probability density. For classification problems, it shows that, under the assumption that the input units are mutually independent and Gaussian-distributed, the probability density function (PDF) of an output unit can be expressed as a mixture of three Gaussian density functions whose means and variances depend on the information of the input units. Experimental results show that the theoretical distribution of the output unit closely matches the empirical distribution.
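The abstract's derivation itself is not reproduced here, but the underlying probabilistic fact it builds on can be illustrated: a weighted sum of independent Gaussian inputs (the pre-activation of a single unit) is itself exactly Gaussian, with mean Σᵢ wᵢμᵢ + b and variance Σᵢ wᵢ²σᵢ². The sketch below checks this by Monte Carlo; the weights, bias, and input statistics are made-up example values, not taken from the paper.

```python
import random

# Hypothetical unit: y = sum_i w_i * x_i + b, with independent inputs
# x_i ~ N(mu_i, sigma_i^2). The exact output law is then
#   y ~ N(sum_i w_i * mu_i + b, sum_i w_i^2 * sigma_i^2).
# All parameter values below are illustrative, not from the paper.
random.seed(0)

w = [0.5, -1.2, 0.8]       # example weights
b = 0.3                    # example bias
mu = [0.0, 1.0, -0.5]      # input means
sigma = [1.0, 0.5, 2.0]    # input standard deviations

# Theoretical mean and variance of the pre-activation output.
theo_mean = sum(wi * mi for wi, mi in zip(w, mu)) + b
theo_var = sum(wi ** 2 * si ** 2 for wi, si in zip(w, sigma))

# Monte Carlo check: sample Gaussian inputs, push them through the unit.
n = 200_000
samples = []
for _ in range(n):
    x = [random.gauss(m, s) for m, s in zip(mu, sigma)]
    samples.append(sum(wi * xi for wi, xi in zip(w, x)) + b)

emp_mean = sum(samples) / n
emp_var = sum((y - emp_mean) ** 2 for y in samples) / n

print(f"theoretical: mean={theo_mean:.3f}, var={theo_var:.3f}")
print(f"empirical:   mean={emp_mean:.3f}, var={emp_var:.3f}")
```

After a nonlinearity (e.g. softmax or sigmoid) the output density is no longer Gaussian, which is where the paper's mixture-of-three-Gaussians approximation comes in.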
