CLOSED: New Signals in Multimedia Systems and Applications: Sensing and Understanding Human Behavior and Interactions – Call for Papers

Guest Editors

Pablo Cesar, CWI and TU Delft, The Netherlands
Vivek Singh, Rutgers University, USA
Ramesh Jain, University of California, Irvine, USA
Nicu Sebe, University of Trento, Italy
Nuria Oliver, Telefonica, Spain

Submission deadline: CLOSED
Publication: January-March 2018

The pervasive deployment of new signal-capturing devices (social media, biosensors) is reshaping the way we infer context, reason about it, and make decisions. Recent developments in mobile, sensor, and wearable technology are making available a plethora of signals that were unthinkable before, with the potential to enable truly personalized and enhanced media-based experiences. Once the realm of audiovisual data, multimedia computing is finally embracing its “multi” nature, promoting research on the impact of other modalities and media. Sensors, in the broadest sense, will have a profound impact, shaking up the foundations of the field. This special issue explores how new sensor technology (from social sensors to biosensors) will affect multimedia systems and applications. It focuses in particular on novel and groundbreaking research from the multimedia community on sensing, understanding, and reacting to the user experience (through personalization, for example).

The objective of this special issue is to revisit how sensor technology is transforming the way context and human behavior are reasoned about, enabling personal and enriched media experiences. In particular, we are interested in the different stages of the life cycle, from the moment multimodal signals are captured and processed, to the inference of meaning, to the reaction of the environment. The number of available sensors, and the promises surrounding them, is constantly increasing. Examples include social sensors, accessories such as watches, and biosensors embedded in textiles. They provide a plethora of multimodal raw data that must be processed and analyzed to yield actionable insights. Sensors provide immense amounts of data at very high rates, and making sense of such timed streams of information is extremely complex; new algorithms are probably required. These algorithms should effectively fuse data coming from different sensors to acquire a better understanding of the user and her situation. We know, based on previous attempts, that it is extremely difficult to build such algorithms so that they run robustly in the wild. The final stage after sensing and understanding, reaction, refers to the reactivity of the environment (the delivered multimedia content, the surroundings of the user, or the network used for communication). This piece of the puzzle provides the adaptation techniques that will improve the quality of the user experience.
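
As a purely illustrative sketch of this capture-fusion-inference-reaction life cycle (not part of the submission requirements), the following Python fragment fuses two hypothetical timed sensor streams, heart rate and accelerometer magnitude, over a time window, infers a coarse context label, and selects an adaptation. The stream names, thresholds, and the adapt_media hook are assumptions made up for the example; a real system would replace the rule-based fusion with learned models operating on far richer streams.

    # Illustrative sketch only: fuse two hypothetical timed sensor streams
    # (heart rate in bpm, accelerometer magnitude in g) over a time window,
    # infer a coarse context label, and react by adapting the media.
    # Stream names, thresholds, and adapt_media are assumptions.
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class Sample:
        t: float      # timestamp in seconds
        value: float  # sensor reading

    def window(samples, t_start, t_end):
        """Readings whose timestamps fall inside [t_start, t_end)."""
        return [s.value for s in samples if t_start <= s.t < t_end]

    def infer_context(heart_rate, accel, t_start, t_end):
        """Late fusion: average each modality over the window, then apply simple rules."""
        hr = mean(window(heart_rate, t_start, t_end) or [0.0])
        acc = mean(window(accel, t_start, t_end) or [0.0])
        if hr > 110 and acc > 1.5:
            return "exercising"
        if hr > 100:
            return "stressed"
        return "relaxed"

    def adapt_media(context):
        """Reaction stage: choose a (hypothetical) presentation setting per context."""
        return {"exercising": "short-form, high-energy content",
                "stressed": "calm content, reduced notifications",
                "relaxed": "full-length content"}[context]

    if __name__ == "__main__":
        heart_rate = [Sample(0.0, 72), Sample(1.0, 76), Sample(2.0, 118)]
        accel = [Sample(0.5, 0.2), Sample(1.5, 0.3), Sample(2.5, 2.1)]
        ctx = infer_context(heart_rate, accel, 2.0, 3.0)   # most recent 1-second window
        print(ctx, "->", adapt_media(ctx))                 # exercising -> short-form, high-energy content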

Topics of interest include, but are not limited to, the following:

  • New multimedia capturing devices
  • Raw data acquisition and processing
  • Multimodal fusion and data analysis for new types of signals
  • Context understanding
  • Emotional and social signals in multimedia
  • Understanding user behavior in multimodal environments
  • Media (and environment) adaptation and personalization

The perspectives of interest include:

  • Multimedia systems
  • Architectural issues
  • Algorithms
  • HCI and quality of experience
  • Computational social science

Submission Information

Submissions should be no more than 6,500 words, including all text, the abstract, keywords, bibliography, biographies, and table text. Each table and figure counts for 200 words. For general author guidelines, see www.computer.org/web/peer-review/magazines.

Submit your article through our online peer-review system, ScholarOne Manuscripts (https://mc.manuscriptcentral.com/mm-cs).

Questions?

Contact the guest editors at mm1-2018@computer.org.