(Hybrid, Dec. 15-18, Washington DC, USA)
https://www.comstar-tech.org/workshops/2024/workshop_2024_MMAI.html
Multimodality is the most general form of information representation and delivery in the real world. Humans naturally rely on multimodal data to make accurate perceptions and decisions. Our digital world is likewise multimodal, combining data modalities such as text, audio, images, videos, animations, drawings, depth, 3D, biometrics, and interactive content. Multimodal data analytics algorithms often outperform single-modal analytics on many real-world problems.
Multi-sensor information fusion has also become a topic of great interest in industry. In particular, companies working on automotive, drone vision, surveillance, or robotics applications have grown rapidly; they aim to automate processes by fusing control signals from a wide variety of sources.
With the rapid development of Big Data technology and its remarkable applications in many fields, multimodal AI for Big Data is a timely topic. This workshop aims to generate momentum around this topic of growing interest and to encourage interdisciplinary interaction and collaboration between Natural Language Processing (NLP), computer vision, audio processing, machine learning, robotics, Human-Computer Interaction (HCI), social computing, cybersecurity, cloud computing, Internet of Things (IoT), and geospatial computing communities. It serves as a forum to bring together active researchers and practitioners from academia and industry to share their recent advances in this promising area.
________________________________________
Topics
This is an open call for papers, which solicits original contributions considering recent findings in theory, methodologies, and applications in the field of multimodal AI and Big Data. The list of topics includes, but is not limited to:
• Multimodal representations
• Multimodal data modeling and data fusion
• Multimodal learning, cross-modal learning
• Multimodal big data analytics, visualization
• Multimodal big data infrastructure and management
• Multimodal scene understanding
• Multimodal perception and interaction
• Multimodal benchmark datasets and evaluations
• Multimodal information tracking, retrieval and identification
• Multimodal object detection, classification, recognition and segmentation
• Language, vision, audio, touch, etc.
• Multimodal AI in robotics (robotic vision, NLP in robotics, Human-Robot Interaction (HRI), etc.)
• Multimodal AI safety (explainability, interpretability, trustworthiness, etc.)
• Multimodal biometrics
• Multimodal applications (autonomous driving, cybersecurity, smart cities, intelligent transportation systems, industrial inspection, medical diagnosis, healthcare, social media, arts, etc.)
________________________________________
Important Dates
Early submission and notification dates (Anywhere on Earth):
- Sept. 3, 2024: full papers (8-10 pages including references & appendices)
- Sept. 10, 2024: short papers (5-7 pages including references & appendices)
- Sept. 17, 2024: poster papers (3-4 pages including references)
- Oct. 7, 2024: notification of paper acceptance
- Oct. 21, 2024: submission of revisions of conditionally accepted papers/posters for the second round of review
Please visit the workshop website for more details.
The main conference of IEEE Big Data 2024 will be held in person, but the MMAI workshop will be hosted both virtually and onsite.
________________________________________
Submission
Please submit your papers directly via the IEEE Big Data 2024 paper submission site. Accepted papers will be published in the IEEE Big Data 2024 proceedings.