8.1.   Modifications of the Growing Neural Network Algorithm

The original algorithm was modified in several respects, which were partly introduced and described in [28] and partly introduced in this work, in order to better suit the calibration of sensor data sets and to improve the generalization ability of the networks built:

1.   Not only neurons with two input links and one output link but also neurons with one input link and one output link can be added to the network. In addition, links can be added between any neuron and a neuron of a preceding layer, ensuring that practically any feedforward network topology can be built. In contrast to the stepwise algorithms, the addition of neurons with two input links takes interactions of two variables into account already during the addition step. Higher-order interactions can be modeled later by adding further links.

2.   The estimation of the reduction of the calibration error was replaced by temporarily inserting a network element, training the network and subsequently predicting a monitor data set. This procedure is repeated for all possible locations and all possible elements. The type of the new element and the location where it is inserted are decided by the maximum reduction of the prediction error for the monitor data, which are not used for training; a schematic sketch of this selection procedure is given after this list. This ensures that the neural network not only approximates the calibration data well but, above all, generalizes well. Using different data subsets for the calibration of the model (training data) and for the building of the model (monitor data) prevents the introduction of the bias demonstrated by Kupinsky et al. [11]. Changing the network topology by adding a network element between two training steps helps the training escape local minima, similar to a random mutation in genetic algorithms.

3.   The stopping criterion of an absolute error limit was replaced by a criterion of a minimal relative error decrease, which is independent of the scaling of the data sets. Network elements are thus inserted repeatedly until the insertion of a new element improves the prediction error by less than this prescribed relative limit (see the second sketch after this list).

4.   The algorithm can start with practically any network topology, not only with an empty network. As the current implementation of the algorithm supports only networks with one output neuron, a separate network has to be used for each analyte.
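
The selection of the best new network element described in points 1 and 2 can be summarized by the following minimal Python sketch. The network object with its clone(), insert() and locations() methods as well as the train() and monitor_error() routines are assumptions made for illustration only; the sketch is not the implementation used in this work.

def best_new_element(net, candidate_elements, train, monitor_error):
    """Temporarily insert every candidate element at every possible location,
    train the grown network and evaluate it on the monitor data.

    net                -- current network (hypothetical object with clone(),
                          insert(element, location) and locations(element))
    candidate_elements -- e.g. a neuron with one input link, a neuron with
                          two input links, or a single additional link
    train              -- trains a network on the training (calibration) data
    monitor_error      -- prediction error on the monitor data, which are
                          not used for training
    """
    best_error, best_element, best_location = float("inf"), None, None
    for element in candidate_elements:
        for location in net.locations(element):   # all admissible positions
            trial = net.clone()                   # temporary insertion only
            trial.insert(element, location)
            train(trial)                          # retrain the grown network
            error = monitor_error(trial)          # generalization estimate
            if error < best_error:
                best_error, best_element, best_location = error, element, location
    return best_error, best_element, best_location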

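The relative stopping criterion of point 3 then controls the outer growing loop. The following sketch builds on the hypothetical best_new_element function above and stops as soon as the best candidate reduces the monitor error by less than the prescribed relative limit; the function names and the default limit of 1% are again assumptions for illustration.

def grow_network(net, candidate_elements, train, monitor_error,
                 min_relative_decrease=0.01):
    """Grow a network, starting from any topology (see point 4), until the
    best new element improves the monitor error by less than the
    scaling-independent relative limit."""
    train(net)
    current_error = monitor_error(net)
    while True:
        error, element, location = best_new_element(
            net, candidate_elements, train, monitor_error)
        # relative decrease of the prediction error achieved by the best candidate
        if (current_error - error) / current_error < min_relative_decrease:
            break                          # improvement too small -> stop growing
        net.insert(element, location)      # keep the best element permanently
        train(net)
        current_error = monitor_error(net)
    return net

As only networks with one output neuron are supported (point 4), such a growing loop would be run separately for each analyte.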