WCRML 2019: Workshop on Crossmodal Learning and Application

Country: Canada

City: Ottawa

Abstract due: 05.04.2019

Dates: 10.06.19 — 10.06.19

Area of Sciences: Pedagogy

Organizing committee e-mail: martin.klinkigt.ut@hitachi.co

Organizers: International Committee


The Workshop on Crossmodal Learning and Application emphasises how different modalities semantically interact with each other, rather than simply integrating information from multiple modalities and retrieving it. The workshop not only attempts to leverage knowledge across modalities but also to motivate its application in industry and society. Its goal is to address questions such as the following:

  • How to handle noise, imbalance, and a small number of labelled samples in cross-modal data?
  • How to efficiently transfer knowledge from one modality with abundant supervision information to another modality with less or even no knowledge?
  • How to translate data across different modalities, e.g. the generation of motion-sensor data from visual input or visually indicated sound?
  • How to align cross-modal data by using appropriate alignment functions and similarity measurements?
  • How to best utilise different modalities to satisfy requirements that sometimes even conflict with each other, such as business demands, cost constraints, and user satisfaction?
  • The sources of the multimodal data are not restricted in any way; they may come from users, devices, machines, systems, and distributed environments.

To contribute to the understanding of cross-modal technologies, we invite original articles on relevant topics, including but not limited to:

  • Multimodal representation/feature learning
  • Cross-modal retrieval
  • Data alignment across modalities, e.g., synchronising motion-sensor data with video
  • Data translation, e.g., visually indicated sound
  • Learning using side information, e.g., modality hallucination
  • Knowledge transfer across modalities, e.g., zero-shot/few-shot learning
  • Applications with cross-modal data, including IoT (Internet of Things), operation and maintenance, surveillance, public transportation, logistics, health care, task-oriented dialogue, human-robot interaction with vision and audio, user/product/job search and recommendation, social media retrieval and analysis, etc.

We encourage submissions of both long and short papers.

Accepted long and short papers will be designated as oral presentations and posters, respectively.

Information source: https://crossmodallearning.github.io/

Conference Web-Site: https://easychair.org/cfp/WCRML2019