12, 13, and 14 September 2011
Kaiserslautern, Germany
Joint event: PSL 2011 (Workshop on Partially Supervised Learning)
Ulm (Germany), September 15-16, 2011 (SEE BELOW)
When facing difficult real-world applications, a single learning
paradigm is often unlikely to yield the solution sought (in spite of
its theoretical generality) without strong cooperation with other,
profoundly different modules that build up the
overall system. For instance, artificial
neural networks are known to be mathematically "universal" machines, but
satisfactory solutions to complex tasks can hardly be achieved with a single
feed-forward connectionist architecture. Historically, this led to the
development of multiple neural network systems, namely mixtures of experts or
neural ensembles, benefiting from the specialization of individual nets
over specific regions of the feature space, according to a divide-and-conquer
strategy. As an alternative, multiple classifier systems were proposed, aiming
at combining models of a different nature (e.g., generalized linear
discriminants, parametric probabilistic models, neural nets) or aim (e.g.,
estimating a discriminant function, or a class-posterior probability, or a
likelihood). In other circumstances, as in the case of hybrid hidden Markov
model/connectionist approaches, the combination of the underlying
paradigms relies on coupling certain general properties
of one of them (e.g., the capability of HMMs to model long-term
dependencies) with the strength of the other at local, specific tasks
that arise within the former (e.g., the flexible, discriminative
modeling of the HMM emission probabilities via neural nets). Along a similar
direction, hybrid random fields were introduced recently. They combine the
overall, general structure of a Markov random field with the optimal fit of
conditional probabilities of individual variables given their Markov blanket
as obtained via Bayesian networks. Similarly, maximum echo-state likelihood
networks (MESLiN) were proposed for sequence processing, relying on the
combination of the reservoir of an echo-state architecture with a parametric
model of the probability density function of the states of the reservoir. Closely related areas concern the
integration of symbolic and sub-symbolic learning machines, and
so-called information fusion. In all these scenarios, researchers are mostly
concerned with the development and investigation of plausible, mathematically
sound techniques for combining the different learners in a feasible, robust
manner (instead of just piling up the different modules onto one
another heuristically). Such research efforts are leading to training algorithms that properly split the original learning
problem over the component machines, training the latter according to a
joint, global criterion that fits the solution of the original, overall problem.
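As a purely illustrative sketch of such a joint, global criterion (the toy data, names, and architecture below are our own choices, not drawn from any specific system mentioned above), a tiny mixture of two linear experts with a softmax gate can be trained by gradient descent on a single global squared-error loss, so that expert specialization emerges from the joint objective rather than from a heuristic split of the problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task with two regimes, one per half of the input range.
x = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.where(x[:, 0] < 0.0, -2.0 * x[:, 0], 3.0 * x[:, 0])

# Two linear experts (slope + bias each) and a linear softmax gate.
W = rng.normal(scale=0.1, size=(2, 2))   # experts: rows are (slope, bias)
G = rng.normal(scale=0.1, size=(2, 2))   # gate:    rows are (slope, bias)

X = np.hstack([x, np.ones_like(x)])      # append a constant bias feature

lr = 0.5
for _ in range(2000):
    e = X @ W.T                          # expert outputs, shape (N, 2)
    logits = X @ G.T
    g = np.exp(logits - logits.max(axis=1, keepdims=True))
    g /= g.sum(axis=1, keepdims=True)    # softmax gating weights, rows sum to 1
    pred = (g * e).sum(axis=1)           # gated blend of the expert outputs
    err = pred - y                       # error of the single global loss
    # Gradients (up to a constant factor) of the global squared error
    # with respect to the experts and the gate:
    dW = ((err[:, None] * g)[:, :, None] * X[:, None, :]).mean(axis=0)
    dg = err[:, None] * (e - pred[:, None]) * g
    dG = (dg[:, :, None] * X[:, None, :]).mean(axis=0)
    W -= lr * dW
    G -= lr * dG

mse = float(np.mean((pred - y) ** 2))
```

On this toy piecewise-linear task the gate can learn to route each half of the input range to a different expert, which is exactly the divide-and-conquer behavior described above: the split of the problem is itself an outcome of minimizing the joint criterion.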
The aim of this Invited Session is to bring together researchers involved in any
area of pattern recognition and machine learning that is related to these
issues. Fellow scientists are invited to submit their paper(s) to this Session, according to
the guidelines for Authors and the reviewing procedures which hold for the KES
Conference hosting this Session. Novel, fresh ideas are particularly welcome (even if in
preliminary form), and strong experimental analyses of established approaches on severe real-world tasks are encouraged as well.
Topics of interest include (but are not limited to):
Please submit your paper through the KES submission system (PROSE), making sure you select the IS30 Invited Session item from the menu (NOTE: this item is listed in the "Invited Sessions" table, not in the "General Sessions" list).