
Ensemble Learning for Multi-source Information Fusion

  • Chapter
Intelligent Autonomous Systems

Part of the book series: Studies in Computational Intelligence ((SCI,volume 275))


Abstract

In this chapter, we propose a new ensemble learning method. The main objective of this approach is to jointly use data-driven and knowledge-based submodels, such as mathematical equations or rules, in the modeling process. The integration of knowledge-based submodels is of particular interest, since they can provide information not contained in the data. Conversely, data-driven models can complement the knowledge-based models with respect to input space coverage. To integrate the different models appropriately, a method for partitioning the input space among the given models is introduced. Such ensembles combine the advantages of both model types, i.e., the robustness and physical transparency of the knowledge-based models and the approximation abilities of data-driven learning. The benefits of this approach are demonstrated for a real-world application.
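The core idea of the abstract can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it assumes a knowledge-based submodel (a physical equation) trusted only in part of the input space, a data-driven submodel fitted on data covering the rest, and a hard input-space partition as the gate. All function names, the boundary value, and the partition rule are hypothetical.

```python
# Illustrative sketch of combining a knowledge-based and a data-driven
# submodel via a partition of the input space (hypothetical example).

def knowledge_model(x):
    # Physical equation assumed valid for small x, e.g. y = 2*x.
    return 2.0 * x

def fit_data_model(xs, ys):
    # Closed-form least-squares fit of y = a*x + b on observed data.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda x: a * x + b

def make_ensemble(data_model, boundary=2.0):
    # Hard gate: use the knowledge-based model where it is trusted,
    # the data-driven model elsewhere in the input space.
    def ensemble(x):
        return knowledge_model(x) if x < boundary else data_model(x)
    return ensemble

# Training data only covers x >= 2, where the physical equation is
# assumed unreliable (underlying relation there: y = 2*x + 1).
xs = [2.0, 3.0, 4.0, 5.0]
ys = [5.0, 7.0, 9.0, 11.0]
ensemble = make_ensemble(fit_data_model(xs, ys))

print(ensemble(1.0))   # knowledge-based region -> 2.0
print(ensemble(4.0))   # data-driven region     -> 9.0
```

A soft gate (e.g., weighting both submodels by a learned confidence, as in mixture-of-experts architectures) would replace the hard `if` with a convex combination; the hard partition is used here only to keep the sketch short.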




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Beyer, J., Heesche, K., Hauptmann, W., Otte, C., Kruse, R. (2010). Ensemble Learning for Multi-source Information Fusion. In: Pratihar, D.K., Jain, L.C. (eds) Intelligent Autonomous Systems. Studies in Computational Intelligence, vol 275. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11676-6_6


  • DOI: https://doi.org/10.1007/978-3-642-11676-6_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-11675-9

  • Online ISBN: 978-3-642-11676-6

  • eBook Packages: Engineering (R0)

