Keynote Speakers 2026
TO BE FINALIZED AT A LATER DATE
ICES will run twice a year at the University of Aizu, with January and May editions.
Prof. J. Chandra, Christ University, India
AI-Driven Multimodal Classification for Emotional Intelligence
Emotional intelligence plays a major role in human interaction, decision-making, and adaptive behaviour. Recent advances in artificial intelligence have enabled multimodal approaches that classify and interpret emotional states by integrating diverse data sources such as text, speech, facial expressions, and physiological signals. Traditional approaches to emotion identification often struggle to capture these different dimensions, leading to incomplete or inaccurate recognition; user-specific characteristics, facial expressions, and speech signals can all be influenced by situational context or social masking. These challenges underscore the need for advanced multimodal systems that integrate diverse data sources to achieve a comprehensive understanding of emotional states, enabling more reliable and practical emotion identification. Multimodal classification has attracted major attention in human-computer interaction, where it is a driving force behind emotion identification. Human emotions can be identified through audio, video, text, speech, facial expressions, and body gestures; however, physiological signals from the central nervous system can identify emotion more precisely than other modalities. Physiological signals comprise brain and peripheral signals, and combining multimodal signals makes emotional states and activities easier to interpret. The multimodal approach demonstrates superiority over unimodal and bimodal approaches, motivated by the complexity and subtlety of emotion detection: emotions are multifaceted, manifesting through different physiological and behavioural signals that each offer unique but complementary information. This research addresses the shortcomings of unimodal and bimodal classification, such as the inability to provide a comprehensive view of emotions or to capture their multi-dimensional nature.
The research aims to develop a new model that supports multiple data modalities to achieve a robust and comprehensive understanding of emotional states, reflected in high classification accuracy. The model explores emotion-sensitive channels, significantly improving its predictive performance. Applications span healthcare, education, and human-computer interaction, while challenges such as cross-domain generalization, bias mitigation, and explainability remain open research areas. The model also aims to advance emotionally intelligent AI systems for real-world deployment.
Speaker H-Index - 11
Prof. Michael Cohen, Higashi Nippon International University, Japan
Second Invention of Intelligence: Power, Promise, and Precarity
Generative AI represents a revolutionary shift in computation, amplifying human creativity and productivity. Its disruptive potential extends through education, governance, society, geopolitics, and warfare, driven by the convergence of statistical and symbolic approaches and its diffusion through autonomous agents and embodied systems. This keynote considers how portrayals of AI and robots in science fiction anticipate plausible real-world trajectories, and asks whether generative AI offers humanity a final opportunity for collective problem-solving or a risky gamble with unpredictable and potentially catastrophic consequences. We will survey embodied agents in popular culture, review foundational principles of generative AI, and outline both transformative possibilities and existential risks. Participants will consider how emerging syntheses of human and artificial creativity may reshape technological development and influence the future of civilization.
Speaker H-Index - 22
SPECIAL INVITED SPEAKERS
TO BE ANNOUNCED AT A LATER DATE
© ETLTC & ACM Chapter on eLearning & Technical Communication: All Rights Reserved.