The 2015 Workshop on Vision and Language Integration (VL'15)

Country: Portugal

City: Lisbon

Abstracts due: 28.06.2015

Dates: 17.09.15 – 21.09.15

Field of science: Philology

Organizing committee contact page: https://sites.google.com/site/vl15workshop/organizers

Organizers: European Network on Integrating Vision and Language (iV&L Net).

 

Computational vision-language integration is the process of associating visual and corresponding linguistic pieces of information. Fragments of natural language, in the form of tags, captions, subtitles, surrounding text or audio, can aid the interpretation of image and video data by adding context or disambiguating visual appearance. In addition, labeled images are essential for training object or activity classifiers. Conversely, by providing contextual and world knowledge that is often implied but lacking in textual input, visual data can help resolve challenges in language processing such as word sense disambiguation, language understanding, machine translation and speech recognition. Moreover, sign languages and gestures are forms of language that require visual interpretation. Since studying language and vision together can also provide new insight into cognition and universal representations of knowledge and meaning, researchers are increasingly turning towards models for grounding language in action and perception. There is growing interest in NLP, computer vision and cognitive science research in models that are capable of learning from and exploiting multi-modal data, that is, of building semantic representations from both linguistic and visual or perceptual input.

The purpose of the VL'15 workshop is to bring together researchers from natural language processing, computer vision, human language technologies, computational linguistics, machine learning, representation learning, reasoning, cognitive science and application communities. The workshop will serve as an interdisciplinary forum for cross-fertilizing discussions and ideas on how to combine and integrate established techniques from different (but related) fields into new unified modeling approaches, as well as how to approach the problem of multi-modal data processing for NLP and vision from a completely new angle. The initiative on integrating vision and text will also yield a better understanding of the nature and usability of the vast multi-modal data available online and in other multi-modal information sources and repositories.

VL'15 Topics

Topics of interest include, but are not limited to (in alphabetical order):
  • Assistive technologies
  • Automatic text illustration
  • Computational modeling of human vision and language
  • Computer graphics generation from text
  • Cross-media linking of entities, attributes, objects, events, and actions
  • Cross-media summarization
  • Facial animation from speech
  • Human-computer interaction in virtual worlds
  • Human-robot interaction
  • Image and video description and summarization
  • Image and video labeling and annotation
  • Image and video retrieval and multi-modal information retrieval
  • Language-driven animation
  • Machine translation with visual enhancement
  • Models of distributional semantics involving vision and language
  • Multi-modal discourse analysis
  • Multi-modal human-computer communication
  • Multi-modal temporal and spatial semantics 
  • Recognition of narratives in text and video
  • Recognition of semantic roles and frames in text, images and video
  • Retrieval models across different modalities
  • Visually grounded language understanding
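For illustration only (not part of the official call): one listed topic, models of distributional semantics involving vision and language, is commonly realized by fusing a text-based vector and an image-based feature vector for the same concept. The minimal Python sketch below, using entirely hypothetical toy vectors, concatenates L2-normalized text and visual features and compares the fused representations with cosine similarity.

# Minimal sketch of a fused multi-modal representation (toy data only).
# Each concept gets a text-based vector and an image-based feature vector;
# both are L2-normalized, concatenated, and compared via cosine similarity.
import numpy as np

def l2_normalize(v):
    """Scale a vector to unit length (leave zero vectors unchanged)."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def fuse(text_vec, image_vec):
    """Build a joint representation by concatenating normalized modalities."""
    return np.concatenate([l2_normalize(text_vec), l2_normalize(image_vec)])

def cosine(a, b):
    """Cosine similarity between two fused representations."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for distributional (text) and visual (image) features.
dog = fuse(np.array([0.9, 0.1, 0.3]), np.array([0.8, 0.2, 0.1, 0.4]))
cat = fuse(np.array([0.8, 0.2, 0.4]), np.array([0.7, 0.3, 0.2, 0.5]))
car = fuse(np.array([0.1, 0.9, 0.2]), np.array([0.1, 0.8, 0.9, 0.1]))

print("dog~cat:", cosine(dog, cat))  # expected to be relatively high
print("dog~car:", cosine(dog, car))  # expected to be lower

The same fused vectors could serve as a starting point for cross-modal retrieval, one of the listed topics, by ranking candidates from one modality against a query represented in the other.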

Conference website: https://sites.google.com/site/vl15workshop/
