Special Sessions in WCCI/IJCNN2018, Brazil
- Published: 2017-12-13
IJCNN-19 Advanced Cognitive Architectures for Machine Learning
Organized by Jose C. Principe (principe@cnel.ufl.edu), Badong Chen (chenbd@mail.xjtu.edu.cn)
Current work in machine learning treats perception of the real world as pattern recognition. While this has been shown possible in pre-defined domains, given large data sets and labels, it is unclear whether the approach scales to autonomous vision, where the complexity of the world may outstrip the gains associated with the unreasonable effectiveness of data. Biological organisms survive in an unknown and uncertain world by creating models of the environment, storing past solutions that worked, and using this knowledge effectively in the future. It may be possible to achieve similar performance in autonomous vision applications if we rethink the architectures and models currently used in deep learning: favoring more parsimonious architectures and encapsulating in mathematics the voluminous literature available in cognitive science.
Scope and Topics
The goal of this special session is to provide a forum for focused discussion on extensions of conventional neural network architectures for space and time processing (convolutional and recurrent nets), including how to go beyond labels and how to organize the representations learned by current deep learning models so that a system can use and generalize them without retraining all of its parameters.
This special session aims to attract both solid contributions and preliminary results that demonstrate the potential and the limitations of new ideas, refinements, or cross-fertilization among machine learning, AI, and the cognitive sciences for solving real-world problems.
Examples of these possible extensions are:
- Hierarchical Models for autonomous vision
- Bidirectional processing architectures
- Generative dynamical models of data
- Parsimonious factorization of space-time joint distributions
- Incorporation of external memory in conventional machine learning architectures
- Alternative learning paradigms beyond backpropagation
- Self-learning and autonomous learning approaches
- Brain inspired learning
- Lifelong Learning
- Statistical syntactic approaches for machine learning