Call for special session papers
Beyond semantics: multimodal understanding of subjective properties

This Special Session aims to gather high-quality contributions on the latest systems and applications of multimedia analysis for subjective property (SP) understanding, detection, and retrieval. In a nutshell, the focus of the special session is on computational methods to learn, infer, or retrieve SPs from multimodal data, together with their applications (e.g., SP-based advertising, retrieval, and search). More specifically, the topics of the special session include (but are not limited to):

  • Data collection/annotation and evaluation methods for SP studies, including active learning and crowdsourcing
  • Computational models for individual SP detection in multimedia data, including beauty, sentiment, interestingness, memorability, creativity, ambiance
  • Computational models for connected SP detection in multimedia data, including virality, popularity, engagement
  • User diversity-aware models for individual and collective SP detection and retrieval
  • Applications of SP detection and retrieval methods, including advertising, retrieval, search
  • Applications of SP detection and retrieval methods in new contexts such as social good and urban spaces

Maximum Length of a Paper

Each full paper should not be longer than 6 pages.

Important Dates

Paper Submission: March 7, 2017 at 23:59 EET (extended from February 28; now closed)
Notification of Acceptance: March 29, 2017
Camera-Ready Papers Due: April 23, 2017 (changed from April 26)

Single-Blind Review

ICMR will use a single-blind review process for special session paper selection. Authors should provide author names and affiliations in their manuscript.

Abstract and Keywords

The abstract and the keywords are the primary basis for assigning papers to reviewers, so make sure they form a concise and complete summary of your paper, with enough information for someone who does not read the full paper to understand what it is about.

Submission Instructions

See the Paper submission section of the conference website.


Organizers

Xavier Alameda-Pineda, INRIA Grenoble (contact person)
Miriam Redi, Bell Labs Cambridge
Mohammad Soleymani, Swiss Center for Affective Sciences
Nicu Sebe, University of Trento
Shih-Fu Chang, Columbia University
