Frank Dieterle, Ph. D. Thesis – 6. Results – Multivariate Calibrations – 6.9. PCA-NN

6.9.   PCA-NN

The combination of a principal component analysis with neural networks is a fast and efficient way of compressing the information fed to the neural networks. Yet, deciding how many principal components to use for the neural networks remains a problem, as this choice determines both the degree of compression and the extent of information loss, similar to the PLS (see section 2.5). Thus, neural networks with 6 hidden neurons in 1 hidden layer were trained with a systematically increasing number of principal components from 1 to 40, and the cross-validation error of the calibration data was determined. The optimal models in terms of the lowest cross-validation errors were obtained by networks using 25 principal components for R22 and 16 principal components for R134a. According to table 2, the prediction errors of the external validation data (2.16% for R22 and 3.24% for R134a) are practically identical with those of the fully connected neural networks. In addition, the true-predicted plots and the statistical tests are practically identical and will not be discussed any further here. The negligible gap between the calibration errors (1.98% for R22 and 3.08% for R134a) and the validation errors indicates that the reduction of the number of parameters (157 for R22 and 103 for R134a) successfully prevents overfitting of the calibration data. Yet, the predictions of the validation data, and with them the generalization ability, are not significantly improved, which might be ascribed to some general drawbacks of the variable compression by the PCA already discussed in section 2.8.7.
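The selection procedure described above can be sketched as follows. This is a minimal illustration using scikit-learn, not the original implementation: the data here are synthetic stand-ins for the sensor signals and analyte concentrations, the component scan is shortened to 1–10 (the thesis scans 1–40), and all names and hyperparameters besides the 6 hidden neurons in 1 hidden layer are assumptions.

```python
# Sketch: choose the number of principal components for a PCA-NN model
# by minimizing the cross-validation error of the calibration data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic calibration data: 120 samples, 40 input variables
# (stand-ins for the time-resolved sensor signals and concentrations).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))
y = X[:, :3] @ np.array([0.5, 0.3, 0.2]) + 0.05 * rng.normal(size=120)

best_n, best_err = None, np.inf
for n_pc in range(1, 11):  # thesis: 1 to 40 principal components
    model = make_pipeline(
        PCA(n_components=n_pc),
        # 6 hidden neurons in 1 hidden layer, as in the thesis
        MLPRegressor(hidden_layer_sizes=(6,), max_iter=2000, random_state=0),
    )
    # cross-validation error of the calibration data (mean squared error)
    err = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    if err < best_err:
        best_n, best_err = n_pc, err

print("optimal number of principal components:", best_n)
```

The key design point is that the PCA and the network are refit together inside each cross-validation fold, so the component selection is not biased by information from the held-out samples.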

© Frank Dieterle, 03.03.2019