[CFP] - FIRST INTERNATIONAL WORKSHOP IN MULTIMEDIA PRAGMATICS (MMPrag'18)

Terry Ruas
Dec 7, 2017

Apologies for any duplicate messages.
T.

====================================================================
CALL FOR PAPERS
FIRST INTERNATIONAL WORKSHOP IN MULTIMEDIA PRAGMATICS (MMPrag'18)
Co-Located with the IEEE FIRST INTERNATIONAL CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR'18)
APRIL 10-12, 2018, MIAMI, FLORIDA

Website: http://mipr.sigappfr.org
====================================================IMPORTANT DATES====================================================

Submissions due: December 20, 2017
Acceptance notification: January 10, 2018
Camera-ready: January 20, 2018
Workshop date: April 10, 2018

======================================================DESCRIPTION======================================================
Most multimedia objects are spatio-temporal simulacra of the real world. This supports our view that the next
grand challenge for our community will be understanding and formally modeling the flow of life around us, across
many modalities and scales. As technology advances, the nature of these simulacra will evolve as well, becoming
more detailed and revealing more to us about the nature of reality.

Currently, the IoT is the state-of-the-art organizational approach for constructing complex representations of the
flow of life around us. Various, perhaps pervasive, sensors, working collectively, will broadcast to us
representations of real events in real time. It will be our task to continuously extract the semantics of these
representations and, where needed, react to them by injecting response actions into the mix to ensure a desired
outcome.

Pragmatics studies context and how it affects meaning, and context is usually culturally, socially, and
historically based. For example, pragmatics would encompass the speaker’s intent, body language, and penchant
for sarcasm, as well as other signs, usually culturally based, such as the speaker’s type of clothing, which
could influence a statement’s meaning. Generic signal/sensor-based retrieval should also use syntactical,
semantic, and pragmatics-based approaches. If we are to understand and model the flow of life around us, this
will be a necessity.

Our community has successfully developed various approaches to decoding the syntax and semantics of these artifacts.
The development of techniques that exploit contextual information, however, is still in its infancy. As the data
horizon expands through the ever-increasing use of metadata, we can put all media on a more equal footing.

The NLP community has its own set of approaches in semantics and pragmatics. Natural language is certainly an excellent
exemplar of multimedia, and the use of audio and text features has played a part in the development of our field.

However, if we are to develop more unified approaches to modeling the flow of life around us, both of our communities
can certainly benefit from examining in detail what the other has to offer. Many approaches are shared, but many
differ. Certainly, research from the NLP community in areas such as word2vec can benefit the multimedia community.

Now is the perfect time to actively promote this cross-fertilization of our ideas to solve some very hard and important
problems.

=========================================================AREAS=========================================================
Authors are invited to submit regular papers (6 pages), short papers (4 pages), and demo papers (2 pages) via the
workshop website, mipr.sigappfr.org, where submission guidelines may also be found.

Topics of interest include, but are not limited to:

- Affective computing
- Computational semiotics
- Cross-cultural multi-modal recognition techniques
- Distributional semantics
- Event modeling, recognition, and understanding
- Gesture recognition
- Human-machine multimodal interaction
- Integration of multimodal features
- Machine learning for multimodal interaction
- Multimodal analysis of human behavior
- Multimodal datasets development
- Multimodal deception detection
- Multi-modal sensor fusion
- Multi-modality modeling
- Sentiment analysis
- Structured semantic embeddings
- Techniques for description generation of images/videos/other signal-based modalities

To be included in the IEEE Xplore Library, accepted papers must be registered and presented.

=====================================================ORGANIZATION======================================================
Chairs:
R. Chbeir, U of Pau, FR, and W. Grosky, UMich-D, US

Program Committee:
M. Abouelenien, UMich-D, US
R. Agrawal, ITL, ERDC, US
A. Aizawa, NII, Japan
Y. Aloimonos, UMD, US
F. Andres, NII, Japan
A. Belz, U of Brighton, UK
R. Bonacin, CTI, BR
J.L. Cardoso, CTI, BR
F. de Franca, UFABC, BR
A. del Bimbo, U Florence, IT
C. Djeraba, U Lille, FR
J. Hirschberg, Columbia U, US
D. Hogg, U of Leeds, UK
A. Jadhav, IBM, US
C. Leung, HK Baptist U, HK
D. Martins, UFABC, BR
R. Mihalcea, U Mich, US
T. Ruas, UMich-D, US
V. Rubin, UWO, CA
O. Salviano, Campinas, BR
S. Satoh, NII, Japan
A. Sheth, Wright St U, US
P. Stanchev, Kettering U, US
J. Tekli, American U, LEB
Yi Yu, NII, Japan