
VISION-LIST digest 28.4


VISION-LIST Digest Wed May 20 08:22:39 PST 2009 Volume 28 : Issue 4

- ***** The Vision List host is VISLIST.COM *****
- Send submissions to submi...@vislist.com
('VLS' MUST APPEAR IN THE SUBJECT or your submission will not be processed)
- Vision List Digest and Archives available at http://WWW.VISLIST.COM
- If you don't have access to a web browser, you can read the
Vislist from the newsgroup comp.ai.vision or send email to
mode...@vislist.com to request mailing list membership
('VLS' MUST APPEAR IN THE SUBJECT or your request will not be processed)
- Access Vision List Archives via anonymous ftp to ftp://FTP.VISLIST.COM
- Vision List supported by Directed Perception (http://www.DPerception.com)

Today's Topics:

ADMIN: New visserver installed!
JOB : Postdoctoral Positions - UNC - Chapel Hill, NC
JOB : PhD Studentship - Queen Mary - UK
CFP : Color and Reflectance ICV WS - Japan - 16JUN2009
CFP : PUC Special Issue - ACM/Springer Journal - 31MAY2009
CFP : Context in Vision Processing WS - Boston, MA - 15JUL2009
CFP : IEEE 3rd Int Conf Biometrics - Wash DC, USA - 31MAY2009
CFP : Theseus/ImageCLEF WS - Corfu - 15JUL2009
CFP : 2nd Int WS Tracking Humans - Japan - 19JUN2009
CFP : IPCV'09 - Las Vegas, NV - 27May2009
CFP : ICMI-MLMI 2009 Cambridge, MA - deadline extended to 05/29/2009
CFP : 1st ACM WS on LS Multimedia - China - 19JUN2009
CFP : 1st Int WS Video Events - China - 6JUL2009

----------------------------------------------------------------------

From: "VISLIST Moderator" <mode...@vislist.com>
Subject: ADMIN: New visserver installed!

Hi all,

The new visserver is in place. You should note a very significant
improvement in web access. Over the next few months, the goal will be to
stabilize and improve the existing Vislist distribution and moderation
functions.

I want to automate and improve the features, services, and timeliness of the
Vislist. The Vislist was founded with the idea of creating a "conversant
forum for the computer vision community." The existing old USENET model and
stale web pages don't meet this charter goal well. Armed with a machine that
has some power, I will be working over this year to update the Vislist's
functionality, ease of use, and moderation.

As usual, any and all comments and information are invited.

Thanks!

Philip Kahn
moderator
VISLIST.com - comp.ai.vision
mode...@vislist.com
submissions to submi...@vislist.com

------------------------------

From: "fengshi" <fen...@med.unc.edu>
Subject: JOB : Postdoctoral Positions - UNC - Chapel Hill, NC

Several postdoctoral positions are available in IDEA lab
( https://www.med.unc.edu/bric/ideagroup ), UNC-Chapel Hill, NC.

Position 1 (One Position on Breast MRI Analysis): The successful
candidate will be expected to develop novel enhancement segmentation and
classification methods for significantly improving the specificity of
dynamic MRI in detecting and diagnosing breast cancer. Both a
spatiotemporal registration method and an enhancement segmentation
method will be developed, along with a novel enhancement classification
method for differentiating between benign and malignant enhancements in
both mass and non-mass enhancement cases. The successful candidate
should have a strong background in Electrical or Biomedical Engineering,
or Computer Science, preferably with emphasis on image registration,
segmentation, and classification. Experience with deformable
registration, adaptive segmentation (e.g., graph cut), and non-linear
classification is highly desirable.
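
As a purely illustrative aside on the kind of dynamic (contrast-enhanced)
MRI analysis described above, the short sketch below computes per-voxel
relative enhancement and flags voxels by a simple washout rule on synthetic
data; the array shape, thresholds, and washout criterion are assumptions
for illustration only, not the lab's actual pipeline.

```python
import numpy as np

# Minimal sketch: per-voxel relative enhancement in a dynamic (4D) MRI series.
# The array shape (t, z, y, x), the threshold values, and the washout rule
# are illustrative assumptions only.
rng = np.random.default_rng(0)
series = rng.random((6, 8, 64, 64)) + 0.1      # synthetic dynamic series, 6 time points
baseline = series[0]                            # pre-contrast volume
peak = series[1:4].max(axis=0)                  # early post-contrast peak
late = series[-1]                               # late post-contrast volume

rel_enhancement = (peak - baseline) / baseline  # relative signal enhancement
washout = (late - peak) / peak                  # negative values suggest washout

enhancing = rel_enhancement > 0.5               # assumed enhancement threshold
suspicious = enhancing & (washout < -0.1)       # assumed washout criterion
print("enhancing voxels:", int(enhancing.sum()),
      "suspicious (washout) voxels:", int(suspicious.sum()))
```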

Position 2 (One Position on Brain Image Segmentation): The successful
candidate should have a strong background in Electrical or Biomedical
Engineering, or Computer Science, preferably with emphasis on image
analysis or computer vision. Experience in medical image segmentation
and analysis is highly desirable. People with a machine learning
background are particularly encouraged to apply. Knowledge of
neuroscience and a programming background (good command of Linux, C and
C++, scripting, and Matlab) are desirable. The research topic will be
the development and validation of tissue segmentation and subcortical
segmentation methods for brain image analysis.

Position 3 (One Position on Brain Image Registration): The successful
candidate should have a strong background in Electrical or Biomedical
Engineering, or Computer Science, preferably with emphasis on image
analysis or computer vision. Experience in medical image registration
and analysis is highly desirable. People with a machine learning
background are particularly encouraged to apply. Knowledge of
neuroscience and a programming background (good command of Linux, C and
C++, scripting, and Matlab) are desirable. The research topic will be
the development and validation of 3D, 4D, and group-wise image
registration methods for brain image analysis.
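
For readers unfamiliar with the objective typically maximised in
intensity-based registration of the kind mentioned above, the sketch below
computes a histogram-based mutual information score between two volumes; it
is a minimal illustration on synthetic data, not the group's implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two same-shaped volumes."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy check: a volume is more informative about a related copy of itself
# than about an unrelated volume (shapes and data are illustrative).
rng = np.random.default_rng(1)
fixed = rng.random((32, 64, 64))
moving_related = 0.8 * fixed + 0.2 * rng.random(fixed.shape)
moving_unrelated = rng.random(fixed.shape)
print(mutual_information(fixed, moving_related),
      mutual_information(fixed, moving_unrelated))
```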

Position 4 (One Position on Network Analysis): Applications are invited
for a postdoctoral fellowship to develop and apply network analysis
methods to brain MR images, e.g., structural MRI, functional MRI, and
DTI. Candidates should have a doctoral degree in Biomedical Engineering,
Computer Science, Mathematics, or a related field, as well as excellent
interpersonal, organizational, and oral and written English
communication skills.
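
As a hedged illustration of network analysis on imaging-derived data, the
sketch below builds a correlation-based connectivity matrix from synthetic
ROI time series and reports node degree; the ROI count, series length, and
threshold are assumptions, not a validated analysis protocol.

```python
import numpy as np

# Minimal sketch: correlation-based "connectivity" graph from ROI time series.
# The number of ROIs, the time-series length, and the 0.3 threshold are
# illustrative assumptions.
rng = np.random.default_rng(2)
ts = rng.standard_normal((90, 200))            # 90 ROIs x 200 time points
conn = np.corrcoef(ts)                         # 90 x 90 correlation matrix
np.fill_diagonal(conn, 0.0)

adj = (np.abs(conn) > 0.3).astype(int)         # binarise with an assumed threshold
degree = adj.sum(axis=1)                       # node degree per ROI
print("mean degree:", degree.mean(), "max-degree ROI:", int(degree.argmax()))
```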

The successful candidates will be part of a diverse group including
radiologists, psychologists, physicists, biostatisticians, and computer
scientists, and will build upon the group's previous work on medical
image analysis. If interested, please email a resume to Dr. Dinggang Shen
( mailto:dgs...@med.unc.edu ).

------------------------------

From: Tao Xiang <txi...@dcs.qmul.ac.uk>
Subject: JOB : PhD Studentship - Queen Mary - UK

Applications are invited for a PhD Studentship to undertake research
within the context of a UK EPSRC-funded project, 'Multi-Object Video
Behaviour Modelling for Abnormality Detection and Differentiation'.

The project aims to develop the underpinning capabilities for an
innovative intelligent video analytics system for monitoring object
behaviour captured in vast quantities of surveillance videos and
detecting/predicting any suspicious and abnormal behaviour that could
pose a threat to public safety and security. The successful candidate
will develop models and algorithms for real-time detection and
differentiation of abnormalities in complex video behaviours that
involve multiple objects interacting with each other.
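
One common way (though certainly not the only one, and not necessarily this
project's) to frame abnormality detection is to model features of normal
behaviour generatively and flag low-likelihood observations; the sketch
below illustrates that idea with a Gaussian mixture on synthetic feature
vectors, where the feature dimensionality, component count, and percentile
cut-off are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical illustration: fit a model of "normal" behaviour features and
# flag low-likelihood test samples as abnormal.
rng = np.random.default_rng(3)
normal_train = rng.normal(0.0, 1.0, size=(500, 8))     # features from normal behaviour
gmm = GaussianMixture(n_components=4, random_state=0).fit(normal_train)

# Assumed cut-off: 1st percentile of training log-likelihoods.
threshold = np.percentile(gmm.score_samples(normal_train), 1)

test = np.vstack([rng.normal(0.0, 1.0, size=(10, 8)),   # normal-looking samples
                  rng.normal(6.0, 1.0, size=(3, 8))])   # far-away, "abnormal" samples
is_abnormal = gmm.score_samples(test) < threshold
print(is_abnormal)
```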

The successful candidate will be based at the Queen Mary Vision
Laboratory ( http://www.dcs.qmul.ac.uk/research/vision/ ), working under
the supervision of Dr. Tao Xiang ( http://www.dcs.qmul.ac.uk/~txiang ) and
also closely with Prof. Shaogang Gong ( http://www.dcs.qmul.ac.uk/~sgg )
in the School of Electronic Engineering and Computer Science, Queen Mary
University of London. The Queen Mary Vision Laboratory is one of the
leading computer vision research laboratories, specialising in video
behaviour analysis and abnormality detection, people detection and
tracking in crowded scenes, human facial expression and body language
modelling, dynamic scene background removal and object categorisation.

Candidates should have a first or upper second class honours degree or
equivalent in Computer Science, Electronic Engineering, Mathematics,
Physics, or a related field, and be able to demonstrate strong
problem-solving and analytical skills. Good programming skills are
required, preferably with Matlab and C++. Research experience in image
processing, computer vision, or machine learning is desirable.

The studentship is for 3 years starting from 1 August 2009 or as soon as
possible thereafter. It includes tuition fees and a tax-free stipend in
line with EPSRC recommendations (currently at £14,940 per annum for the
2008/09 session), and is available to candidates of all nationalities.

Informal enquiries can be made by email to Dr Tao Xiang:
txi...@dcs.qmul.ac.uk

An application form can be obtained at
http://www.qmul.ac.uk/postgraduate/apply/

To apply please email the following documents to Dr. Tao Xiang
( mailto:txi...@dcs.qmul.ac.uk ): a completed application form, a CV
listing all publications, your representative publications in PDF
format, two independent reference letters, and other relevant
documents as requested (see http://www.qmul.ac.uk/postgraduate/apply/ ).
These documents must also be provided in paper form and sent to the
Admissions and Recruitment Office (see the application form for
address).

The closing date for the applications is Wednesday 1st July 2009.
Interviews are expected to take place during w/c 6th July 2009.

------------------------------

From: Joost <jo...@cvc.uab.es>
Subject: CFP : Color and Reflectance ICV WS - Japan - 16JUN2009

title: Color and Reflectance in Imaging and Computer Vision Workshop
location: Kyoto, Japan, in conjunction with ICCV 2009
date: October 4, 2009
website: http://staff.science.uva.nl/~gevers/CRICV09/

We are soliciting original contributions that address a wide range of
theoretical and application issues including:

- Theory, Color science, colorimetry, color spaces, color difference,
complex reflection models, shading modeling, color appearance models,
multi-spectral.
- Sensors and Physics: Spectral appearance models, spectral imaging
systems, spectral sensor design, active illumination methods, spectral
image analysis.
- Object, Scene and Video Recognition: Color invariance, color saliency,
color constancy, color features (salient points), color descriptors,
matching, machine learning, color image processing of video and still
images, color in motion and tracking.
- Image/Video Processing: Pre-processing, filtering, enhancement,
specularity and shadow removal, feature detection, color texture, image
segmentation, feature grouping, image sequence processing, color
compression, spectral color processing, colorization.
- Vision Color perception: color psychophysics, color constancy, color
discrimination, psychophysical studies and human studies of colour
perception, color memory, color cognition, spatial and temporal color
vision.
- Applications: Industrial inspection, color in food, color in human
computer interaction, medical, and biological applications.

------------------------------

From: Zhiwen Yu <zhi...@GMAIL.COM>
Subject: CFP : PUC Special Issue - ACM/Springer Journal - 31MAY2009

Special Issue on Multimodal Systems, Services and Interfaces for
Ubiquitous Computing

ACM/Springer Journal of Personal and Ubiquitous Computing

Guest Editors: Zhiwen Yu, Frode Eika Sandnes, Kenji Mase, Fabio Pianesi

Activity recognition and implicit interaction are central themes in
ubiquitous computing. These environments usually encompass a variety of
modalities (e.g., speech, gesture, and handwriting), collecting rich and
complex information by integrating multiple devices and different kinds
of sensors. In these cases, multimodal recognition achieves robust and
reliable results by leveraging different recognition mechanisms and
making the best use of the characteristics of each channel. Multimodal
interaction, in turn, enables natural and implicit interaction in
ubiquitous computing contexts by making available varied and flexible
interfaces that adapt to the environment. In the end, multimodal
approaches are key to accomplishing all these tasks, as they overcome
the limitations and recognition difficulties of any single modality.
Hence multimodal systems, services and interfaces are crucial
ingredients of ubiquitous computing and have attracted much interest in
both industry and academia over the last decade.
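
To make the fusion idea concrete, the toy sketch below combines
per-modality recognition scores by confidence-weighted late fusion; the
modalities, weights, and scores are invented purely for illustration.

```python
import numpy as np

# Toy late-fusion sketch: combine per-modality class scores with weights that
# reflect each channel's assumed reliability. All values are illustrative.
scores = {
    "speech":      np.array([0.60, 0.30, 0.10]),
    "gesture":     np.array([0.20, 0.50, 0.30]),
    "handwriting": np.array([0.25, 0.25, 0.50]),
}
weights = {"speech": 0.5, "gesture": 0.3, "handwriting": 0.2}  # assumed reliabilities

fused = sum(weights[m] * scores[m] for m in scores)
fused /= fused.sum()                            # renormalise to a distribution
print("fused scores:", fused, "-> predicted class:", int(fused.argmax()))
```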

This special issue aims to further scientific research within the field
of multimodal interaction, services and systems for ubiquitous
computing. It will accept original research papers that report the latest
results and advances in this area. It will also invite review articles
that focus on the state-of-the-art in multimodal concepts and systems,
highlighting trends and challenges. The papers will be peer reviewed and
will be selected on the basis of their quality and relevance to the
topic of this special issue.

Topics include (but are not limited to):
- Multimodal sensing in smart environments
- Multimodal fusion techniques
- Multimodal activity recognition
- Multimodal mobility understanding
- Multimodal user modeling
- Multimodal content access and adaptation
- Intelligent user interface
- Multimodal support for social interaction
- Virtual and augmented multimodal interfaces
- Distributed and collaborative multimodal interfaces
- Architectures and tools for multimodal application development
- Applications such as smart home, healthcare, and meeting space
- Evaluation of multimodal systems and interfaces

IMPORTANT DATES
Full manuscript due: May 31, 2009
Notification of the first review process: Aug. 15, 2009
Final acceptance notification: Oct. 20, 2009
Final manuscript due: Oct. 31, 2009
Publication date: Spring 2010 (Tentative)

PAPER SUBMISSION
Submissions should be prepared according to the author instructions
available at the journal homepage,
http://www.springer.com/computer/user+interfaces/journal/779 .
Manuscripts must be submitted as a PDF file to the corresponding guest
editor, Zhiwen Yu ( mailto:zhi...@gmail.com ). Information about the manuscript
(title, full list of authors, corresponding author's contact, abstract,
and keywords) must be included in the submission email.

GUEST EDITORS
Zhiwen Yu, Northwestern Polytechnical University, P. R. China, Email:
mailto:zhiw...@nwpu.edu.cn
Frode Eika Sandnes, Oslo University College, Norway, Email:
mailto:Frode-Eik...@iu.hio.no
Kenji Mase, Nagoya University, Japan, Email: mailto:ma...@nagoya-u.jp
Fabio Pianesi, FBK-irst, Italy, Email: mailto:pia...@fbk.eu

------------------------------

From: A.Ni...@EWI.UTWENTE.NL
Subject: CFP : Context in Vision Processing WS - Boston, MA - 15JUL2009

Call for Papers
Workshop on Use of Context in Vision Processing (UCVP)

http://hmi.ewi.utwente.nl/ucvp09

in conjunction with
Eleventh International Conference on Multimodal Interfaces and Workshop
on Machine Learning for Multi-modal Interaction (ICMI-MLMI 2009)

http://icmi2009.acm.org/
Boston, USA, November 2-6, 2009

Background. The Workshop on Use of Context in Vision Processing (UCVP)
offers a timely opportunity for the exchange of recent work on employing
contextual information in computer vision problems. Recent efforts to
define ambient intelligence applications around user-centric concepts,
advances in different sensing modalities, and the expanding interest in
multi-modal information fusion and in situation-aware, dynamic vision
processing algorithms have created a common motivation across different
research disciplines to utilize context as a key enabler of
application-oriented vision. Improved
robustness, efficient use of sensing and computing resources, dynamic
task assignment to different operating modules, as well as adaptation to
event and user behavior models are among the benefits a vision
processing system can gain through the utilization of contextual
information.
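
As a small hypothetical illustration of how contextual information can
modulate a vision module, the sketch below re-weights a detector's class
scores with a scene-context prior via a simple Bayesian-style update; the
class list, scores, and prior are assumptions made for this example.

```python
import numpy as np

# Hypothetical sketch: combine a detector's per-class scores with a
# scene-context prior (e.g., an "office" context makes "keyboard" more
# likely than "cow"). The class list, scores, and prior are invented.
classes = ["person", "keyboard", "cow"]
detector_scores = np.array([0.40, 0.35, 0.25])   # appearance-only likelihoods
context_prior = np.array([0.50, 0.45, 0.05])     # prior given an "office" context

posterior = detector_scores * context_prior
posterior /= posterior.sum()
print(dict(zip(classes, np.round(posterior, 3))))
```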

Aims and scope. UCVP aims to address the opportunities in incorporating
contextual information in algorithm design for single or multi-camera
vision systems, as well as systems in which vision is complemented with
other sensing modalities, such as audio, motion, proximity, occupancy,
and others. The objective of the workshop is to gather high-quality
contributions describing leading-edge research in the use of context in
vision processing. The workshop further aims to stimulate interaction
among the participants through a panel discussion.

Topics of interest to the workshop include:
* Sources of context (multi-camera networks, multi-modal sensing
systems, long-term observation, behavior models, spatial or temporal
relationships of objects and events, interaction of user with objects,
internet resources as knowledge-base for context extraction)
* User-centric context (demographic information, activity, user's
emotional state, stated preferences, explicit and implicit interfaces,
interaction between users)
* Uses of context (context-driven event interpretation, active vision,
multi-modal activation, service provision and switching based on
context, response and interaction with user, detection of abnormal
behavior, active sensing, task assignment to different sensing modules,
guided vision based on high-level reasoning, user behavior modeling,
applications in smart environments, human-computer interfaces)

The workshop aims to encourage collaboration between researchers in
different areas of computer vision and related disciplines. In addition,
by introducing topics of emerging applications in smart environments,
multi-camera networks, and multi-modal sensing as sources of context in
vision, the workshop aims to extend the notion of context-based vision
processing to include high-level and application-driven information
extraction and fusion.

Paper submission. The workshop solicits original and unpublished papers
that address a wide range of issues concerning the use of context in
vision processing. Authors should submit papers not exceeding six (6)
pages in total in ACM format
( http://www.acm.org/sigs/pubs/proceed/template.html ). Submissions must
be sent in PDF to the following email address: mailto:uc...@cs.utwente.nl .

Accepted papers will be presented at the workshop and will appear in the
ACM Digital Library. Hardcopy proceedings will be available during
the workshop. At least one author of each paper must register and attend
the workshop to present the paper.

Important dates.
Paper submission: July 15, 2009
Author notification: September 1, 2009
Camera-ready due: September 25, 2009
Workshop: November 5, 2009

Registration. Please note that registration is required in order to
include an accepted paper in the proceedings. Please refer to the main
ICMI 2009 website for more details.

Organizing team.
Hamid Aghajan (Stanford University, USA)
Ralph Braspenning (Philips Research, The Netherlands)
Yuri Ivanov (MERL, USA)
Louis-Philippe Morency (USC, USA)
Anton Nijholt (University of Twente, The Netherlands)
Maja Pantic (Imperial College, London UK; University of Twente, The
Netherlands)
Ming-Hsuan Yang (Univ. of California Merced, USA)

Program committee
Stan Birchfield, Clemson University, USA
Yang Cai, CMU, USA
Tanzeem Choudhury, Dartmouth College, USA
Bill Christmas, University of Surrey, UK
Maurice Chu, PARC, Palo Alto, USA
David Demirdjian, MIT, USA
Abhinav Gupta, University of Maryland, USA
Ronald Poppe, TU Delft, The Netherlands
Neil Robinson, Heriot-Watt University, UK
Stan Sclaroff, Boston University, USA
Rainer Stiefelhagen, University of Karlsruhe, Germany
YingLi Tian, CCNY, New York
Fernando de la Torre, CMU, USA

------------------------------

From: Kevin W Bowyer <k...@cse.nd.edu>
Subject: CFP : IEEE 3rd Int Conf Biometrics - Wash DC, USA - 31MAY2009

Call for Papers

Biometrics Theory, Applications and Systems (BTAS 09)

The IEEE Third International Conference on Biometrics: Theory,
Applications and Systems (BTAS 09) is the premier research meeting
focused on biometrics. Its broad scope includes advances in
fundamental pattern recognition techniques relevant to biometrics, new
algorithms and/or technologies for biometrics, analysis of specific
applications, and analysis of the social impacts of biometrics
technology. Areas of coverage include biometrics based on voice,
fingerprint, iris, face, handwriting, gait and other modalities, as
well as multi-modal biometrics and new biometrics based on novel
sensing technologies. Submissions will be rigorously reviewed, and
should clearly make the case for a documented improvement over the
existing state of the art. Contributions in established areas such as
voice, face, iris, fingerprint, and gait are encouraged to report
experimental results on the largest and most challenging publicly
available datasets. Papers examining the usability and social
impact of biometrics technology are also encouraged.
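
Since the call stresses rigorous experimental evaluation, the sketch below
shows one standard verification metric, the equal error rate, computed from
simulated genuine and impostor match-score distributions; the scores are
synthetic and not drawn from any real dataset.

```python
import numpy as np

# Minimal sketch of an equal-error-rate (EER) computation for a biometric
# verification system, using simulated genuine/impostor match scores.
rng = np.random.default_rng(4)
genuine = rng.normal(0.7, 0.1, 2000)      # simulated genuine-match scores
impostor = rng.normal(0.4, 0.1, 2000)     # simulated impostor-match scores

thresholds = np.linspace(0.0, 1.0, 1001)
frr = np.array([(genuine < t).mean() for t in thresholds])   # false rejection rate
far = np.array([(impostor >= t).mean() for t in thresholds]) # false acceptance rate

i = np.argmin(np.abs(frr - far))
print(f"EER ~ {(frr[i] + far[i]) / 2:.3f} at threshold {thresholds[i]:.3f}")
```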

Paper submission: May 31.
Decisions to authors: July 17.
Final versions: August 15
BTAS 09: September 28--30, 2009.

Additional information is available on the conference web page:
http://www.cse.nd.edu/BTAS_09

------------------------------

From: henning...@sim.hcuge.ch
Subject: CFP : Theseus/ImageCLEF WS - Corfu - 15JUL2009

Call for Papers: Theseus/ImageCLEF workshop on visual information
retrieval evaluation

History:
The THESEUS/ImageCLEF workshop follows the tradition of the preceding
QUAERO and MUSCLE workshops held between 2005 and 2008. Its aim is
similar to that of its predecessors: to give participants of ImageCLEF a
better perspective and to allow for discussion of topics related to
visual information retrieval. Additionally, a special focus is set on
information retrieval supported by structured knowledge, e.g., by using
ontologies.

Important Dates:
15.07.2009 - Paper Submission Deadline (Extended Abstract)
15.08.2009 - Author Notification
01.09.2009 - Final Paper Submission (Camera Ready)
29.09.2009 - Theseus/ImageCLEF Workshop in Corfu

Registration:

Registration for this workshop is free.

More information can be found at: http://www.imageclef.org/2009/preCLEF

------------------------------

From: lop...@gmail.com
Subject: CFP : 2nd Int WS Tracking Humans - Japan - 19JUN2009

THEMIS2009
2nd IEEE Int. Workshop on Tracking Humans for the
Evaluation of their Motion in Image Sequences
In conjunction with ICCV2009 ( http://www.iccv2009.org/ )
October 3rd, 2009
Kyoto, Japan
http://iselab.cvc.uab.es/themis2009

During the past decades, important efforts in computer vision research
have been focused on the description of human movements in image
sequences. Broadly speaking, the main goal was the estimation of
quantitative parameters describing where human motion is
detected. Nowadays, the focus is on the analysis of image sequences by
applying image and scene understanding techniques to that detected
human motion. That is, the true challenge is the generation of
qualitative descriptions about the meaning of motion, therefore
understanding not only where, but also why a human behaviour is being
observed. These goals have become key tasks in many computer vision
applications, such as image and scene understanding; video indexing
and retrieval; video surveillance and advanced human-computer
interaction.
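
As an elementary and deliberately simplified example of estimating where
motion occurs, the sketch below applies frame differencing to two synthetic
grayscale frames and reports the bounding box of changed pixels; real
systems use far richer detectors and trackers, and the frames and threshold
here are assumptions.

```python
import numpy as np

# Very simplified sketch of "where is motion?": difference two grayscale
# frames, threshold, and report the bounding box of changed pixels.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:160, 140:180] = 200                    # a synthetic moving blob

diff = np.abs(curr.astype(int) - prev.astype(int))
motion = diff > 25                              # assumed motion threshold
ys, xs = np.nonzero(motion)
if ys.size:
    print("motion bbox (x, y, w, h):",
          (int(xs.min()), int(ys.min()),
           int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)))
else:
    print("no motion detected")
```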

The Second International Workshop on Tracking Humans for the
Evaluation of their Motion in Image Sequences (THEMIS2009) will focus
on the understanding of human behaviours in image sequences based on
computer vision. THEMIS2009 is interested in human motion understanding
techniques not only for surveillance but also for sports, news,
documentaries and movie footage.

THEMIS2009 will aim at promoting interaction and collaboration among
researchers specialising in these related fields:
- High-level behaviour recognition and understanding;
- Use of ontologies on human motion for video footage;
- Browsing, indexing and retrieval of human behaviours in video;
- Region categorization in human-populated scenarios;
- Automatic annotation of human motion in video content;
- Natural-language description of human behaviours;
- Cognitive surveillance and ambient intelligence;
- Learning models for behaviour analysis (body/face);
- Recognizing behaviours in multimedia archives;
- Human behaviour synthesis: articulated models and animation.

See web site for complete and updated information.

SUBMISSION DATES
Submission of full papers: June 19th, 2009
Notification of acceptance: July 24th, 2009
Camera Ready: August 14th, 2009
2nd THEMIS Workshop 2009: October 3rd, 2009

Please submit your paper using the EasyChair system:
https://www.easychair.org/login.cgi?conf=themis2009

Double submissions with ICCV 2009 are accepted, in accordance with the
policies of the conference.

Accepted papers will be published in the main DVD proceedings of ICCV
and in a journal with a JCR impact factor.

More details on paper submission rules and guidelines are available on
the workshop website.

WORKSHOP COMMITTEE
Jordi González, Universitat Autònoma de Barcelona, Spain
Thomas B. Moeslund, University of Aalborg, Denmark
Liang Wang, University of Melbourne, Australia

Sponsored by EU Projects IST-027110 HERMES, IST-045547 VIDI-Video.

------------------------------

From: "A. M. G. Solo" <amg...@yahoo.com>
Subject: CFP : IPCV'09 - Las Vegas, NV - 27May2009

Call For Papers - Deadline: May 27, 2009

WORLDCOMP'09
The 2009 World Congress in Computer Science,
Computer Engineering, and Applied Computing

Date and Location: July 13-16, 2009, Las Vegas, USA
http://www.world-academy-of-science.org/

Indexing: Inspec / IET / The Institute for Engineering and Technology,
DBLP / CS Bibliography, and others.

You are invited to submit a paper (see instructions below). This
announcement is ONLY for those who missed the opportunity to submit
their papers in response to earlier announcements (authors who have
already been notified that their papers have been accepted or rejected
should IGNORE this announcement).

WORLDCOMP'09 is composed of a number of tracks (joint-conferences,
tutorials, and workshops); all will be held simultaneously, same
location and dates: July 13-16, 2009. For the complete list of joint
conferences, see: http://www.world-academy-of-science.org/

This is a Call For Papers for publication in the Final Edition of the
conference proceedings. All papers submitted in response to this
announcement will be evaluated for publication in the Final Edition of
the proceedings which will go to press soon after the conference
(publication date: late August 2009).

IMPORTANT DATES:
May 27, 2009: Submission of full papers for evaluation (~7 pages)
June 10, 2009: Notification of acceptance
June 24, 2009: Registration
July 13-16, 2009: WORLDCOMP'09 Congress (all joint-conferences)
July 24, 2009: Camera-Ready Papers Due for publication in the
Final Edition of the proceedings.

SUBMISSION OF PAPERS:

Prospective authors are invited to submit/upload their papers in PDF or
MS Word format (about 7 pages, single-spaced, font size 10 or 11) to
the following web site: http://worldcomp.cviog.uga.edu/

All reasonable typesetting formats are acceptable. Authors of accepted
papers will later be asked to follow specific typesetting instructions
to prepare their final papers for publication.

Papers must not have been previously published or currently submitted
for publication elsewhere. The first page of the paper should include:
title of the paper, name, affiliation, postal address, and email address
for each author. Accepted papers will be published in the final edition
of the respective proceedings/books.

All submissions will be evaluated for originality, significance,
clarity, and soundness. Each paper will be refereed by two researchers
in the topical area. All proceedings of WORLDCOMP will be published and
indexed in: Inspec / IET / The Institute for Engineering and Technology,
DBLP / CS Bibliography, and others.

LIST OF CONFERENCES:
BIOCOMP'09: International Conf. on Bioinformatics &
Computational Biology
CDES'09: International Conf. on Computer Design
CGVR'09: International Conf. on Computer Graphics & Virtual
Reality
CSC'09: International Conf. on Scientific Computing
DMIN'09: International Conf. on Data Mining
EEE'09: International Conf. on e-Learning, e-Business,
Enterprise Information Systems, & e-Government
ERSA'09: International Conf. on Engineering of Reconfigurable
Systems and Algorithms
ESA'09: International Conf. on Embedded Systems & Applications
FCS'09: International Conf. on Foundations of Computer Science
FECS'09: International Conf. on Frontiers in Education: Computer
Science & Computer Engineering
GCA'09: International Conf. on Grid Computing & Applications
GEM'09: International Conf. on Genetic & Evolutionary Methods
ICAI'09: International Conf. on Artificial Intelligence
ICOMP'09: International Conf. on Internet Computing
ICWN'09: International Conf. on Wireless Networks
IKE'09: International Conf. on Information & Knowledge
Engineering
IPCV'09: International Conf. on Image Processing, Computer
Vision, & Pattern Recognition
MSV'09: International Conf. on Modeling, Simulation &
Visualization Methods
PDPTA'09: International Conf. on Parallel and Distributed
Processing Techniques & Applications
SAM'09: International Conf. on Security and Management
SERP'09: International Conf. on Software Engineering Research
and Practice
SWWS'09: International Conf. on Semantic Web and Web Services


PLANNED TUTORIALS:

See the following web site for a partial list:
http://www.world-academy-of-science.org/worldcomp09/ws/tutorials

KEYNOTE LECTURES:
See the following web site for a partial list:
http://www.world-academy-of-science.org/worldcomp09/ws/keynotes

LOCATION OF CONFERENCE:
WORLDCOMP will be held in the Monte Carlo hotel, Las Vegas, USA (with
any overflows at other nearby hotels). This is a mega hotel with
excellent conference facilities and over 3,000 rooms. It is minutes
from the airport, with a 24-hour shuttle service.
This hotel has many recreational attractions, including: spa, pools,
sunning decks, Easy River, wave pool, lighted tennis courts, nightly
shows, a number of restaurants, ... The negotiated room rate for
conference attendees is very reasonable. The hotel is within walking
distance from most other attractions.

SPONSORS:
Academic/Technical Co-Sponsors: (a partial list)

-> United States Military Academy, Network Science Center
-> Biomedical Cybernetics Lab., HST of Harvard University and
MIT, USA
-> Argonne's Leadership Computing Facility of Argonne National
Laboratory, USA
-> Functional Genomics Laboratory, University of Illinois at
Urbana-Champaign, USA
-> Minnesota Supercomputing Institute, University of Minnesota, USA
-> Intelligent Data Exploration and Analysis Laboratory, University
of Texas at Austin, Austin, Texas, USA
-> Harvard Statistics Department Genomics & Bioinformatics
Laboratory, Harvard University, USA
-> Texas Advanced Computing Center, The University of Texas at
Austin, Texas, USA
-> Center for the Bioinformatics and Computational Genomics,
Georgia Institute of Technology, Atlanta, Georgia, USA
-> Bioinformatics & Computational Biology Program, George Mason
University, Virginia, USA
-> Institute of Discrete Mathematics and Geometry, Vienna
University of Technology, Austria
-> BioMedical Informatics & Bio-Imaging Laboratory, Georgia
Institute of Technology and Emory University, Georgia, USA
-> Knowledge Management & Intelligent System Center (KMIS) of
University of Siegen, Germany
-> National Institute for Health Research, UK
-> Hawkeye Radiology Informatics, Department of Radiology,
College of Medicine, University of Iowa, Iowa, USA
-> Institute for Informatics Problems of the Russian Academy of
Sciences, Moscow, Russia.
-> Medical Image HPC & Informatics Lab (MiHi Lab), University
of Iowa, Iowa, USA
-> SECLAB
University of Naples Federico II, University of Naples
Parthenope, and the Second University of Naples, Italy
-> The University of North Dakota, Grand Forks, North Dakota, USA
-> Intelligent Cyberspace Engineering Lab., ICEL, Texas A&M
University (Com./Texas), USA
-> International Society of Intelligent Biological Medicine
-> World Academy of Biomedical Sciences and Technologies

Other Co-Sponsors:
-> European Commission
-> Super Micro Computer, Inc., San Jose, California, USA
-> High Performance Computing for Nanotechnology (HPCNano)
-> HoIP - Health without Boundaries
-> The International Council on Medical and Care Compunetics
-> The UK Department for Business, Enterprise & Regulatory Reform
-> VMW Solutions Ltd.
-> Scientific Technologies Corporation
-> Hodges' Health
-> Bentham Science Publishers
-> GridToday

------------------------------

From: Yang Liu <ya...@HLT.UTDALLAS.EDU>
Subject: CFP : ICMI-MLMI 2009 Cambridge, MA - deadline extended to 05/29/2009

ICMI-MLMI 2009

Cambridge, MA, USA,
November 2-6 2009
sponsored by ACM SIGCHI

The Eleventh International Conference on Multimodal Interfaces and The
Sixth Workshop on Machine Learning for Multimodal Interaction will
jointly take place in the Boston area from November 2-6, 2009. The main
aim of ICMI-MLMI 2009 is to further scientific research within the broad
field of multimodal interaction, methods and systems. This joint
conference will focus on major trends and challenges in this area, and
work to identify a roadmap for future research and commercial success.
ICMI-MLMI 2009 will feature a single-track main conference with keynote
speakers, panel discussions, technical paper presentations, poster
sessions, and demonstrations of state-of-the-art multimodal systems and
concepts. It will be followed by workshops.

Venue:

The conference will take place at the MIT Media Lab, widely known for
its innovative spirit. Organized in Cambridge, Massachusetts, USA,
ICMI-MLMI 2009 provides an excellent setting for brainstorming and
sharing the latest advances in multimodal interaction, systems, and
methods in a city known as one of the top historical, technological, and
scientific centers of the US.

Important dates:

Paper submission May 29, 2009
Author notification July 20, 2009
Camera-ready due August 20, 2009
Conference Nov 2-4, 2009
Workshops Nov 5-6, 2009

Topics of interest:

Multimodal and multimedia processing:

Algorithms for multimodal fusion and multimedia fission
Multimodal output generation and presentation planning
Multimodal discourse and dialogue modeling
Generating non-verbal behaviors for embodied conversational agents
Machine learning methods for multimodal processing

Multimodal input and output interfaces:

Gaze and vision-based interfaces
Speech and conversational interfaces
Pen-based interfaces
Haptic interfaces
Interfaces to virtual environments or augmented reality
Biometric interfaces combining multiple modalities
Adaptive multimodal interfaces

Multimodal applications:

Mobile interfaces
Meeting analysis and intelligent meeting spaces
Interfaces to media content and entertainment
Human-robot interfaces and human-robot interaction
Vehicular applications and navigational aids
Computer-mediated human to human communication
Interfaces for intelligent environments and smart living spaces
Universal access and assistive computing
Multimodal indexing, structuring and summarization

Human interaction analysis and modeling:

Modeling and analysis of multimodal human-human communication
Audio-visual perception of human interaction
Analysis and modeling of verbal and non-verbal interaction
Cognitive modeling of users of interactive systems

Multimodal data, evaluation, and standards:

Evaluation techniques and methodologies for multimodal interfaces
Authoring techniques for multimodal interfaces
Annotation and browsing of multimodal data
Architectures and standards for multimodal interfaces

Paper Submission:

There are two different submission categories: regular paper and short
paper.
The page limit is 8 pages for regular papers and 4 pages for short papers.
The presentation style (oral or poster) will be decided by the committee
based on suitability and schedule.

Demo Submission:

Proposals for demonstrations shall be submitted to demo chairs
electronically.
A two-page description with photographs of the demonstration is required.

Doctoral Spotlight:

Funds are expected from NSF to support participation of doctoral candidates
at ICMI-MLMI 2009, and a spotlight session is planned to showcase ongoing
thesis work. Students interested in travel support can submit a short or
long paper as specified above.


Organizing committee

General Co-Chairs:
James L. Crowley, INRIA, Grenoble, France
Yuri A. Ivanov, MERL, Cambridge, USA
Christopher R. Wren, Google, Cambridge, USA

Program Co-Chairs:
Daniel Gatica-Perez, Idiap Research Institute, Martigny, Switzerland
Michael Johnston, AT&T Labs Research, Florham Park, USA
Rainer Stiefelhagen, University of Karlsruhe, Germany

Treasurer:
Janet McAndlees, MERL, Cambridge, USA

Sponsorship:
Herve Bourlard, Idiap Research Institute, Martigny, Switzerland

Student Chair:
Rana el Kaliouby, MIT Media Lab, Cambridge, USA

Local Arrangements:
Clifton Forlines, MERL, Cambridge, USA
Deb Roy, MIT Media Lab, Cambridge, USA
Thanks to Cole Krumbholz, MITRE, Bedford, USA

Publicity:
Sonya Allin, University of Toronto, Canada
Yang Liu, University of Texas at Dallas, USA

Publications:
Louis-Philippe Morency, University of Southern California, USA

Workshops:
Xilin Chen, Chinese Academy of Sciences, China
Steve Renals, University of Edinburgh, Scotland

Demos:
Denis Lalanne, University of Fribourg, Switzerland
Enrique Vidal, Polytechnic University of Valencia, Spain

Posters:
Kenji Mase, Nagoya University, Japan

Volunteer Chair:
Matthew Berlin, MIT Media Lab, Cambridge, USA

------------------------------

From: Rong Yan <ya...@US.IBM.COM>
Subject: CFP : 1st ACM WS on LS Multimedia - China - 19JUN2009

CALL FOR PAPERS

The 1st ACM Workshop on Large-Scale Multimedia Retrieval and Mining
(LS-MMRM)
in conjunction with 2009 ACM International Conference on Multimedia
(ACM-MM)
Beijing Hotel, Beijing, China
October 23, 2009

Recent years have witnessed an explosive growth of multimedia content
driven by the wide availability of massive storage devices,
high-resolution video cameras and fast networks. Stimulated by recent
progress in scalable machine learning, feature indexing and multi-modal
analysis techniques, researchers are becoming increasingly interested in
exploring challenges and new opportunities for developing much larger
scale approaches for multimedia retrieval and mining. Many of these
computationally-intensive ideas are now becoming practical because of
the broader availability of high-speed clusters and the advent of cloud
computing.
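
As one hedged illustration of the kind of scalable feature indexing alluded
to above, the sketch below hashes feature vectors with random hyperplane
projections (a common locality-sensitive hashing scheme) so that
near-duplicates tend to collide in the same bucket; the dimensions and data
are invented for this example.

```python
import numpy as np

# Illustrative random-hyperplane (LSH-style) hashing: similar feature vectors
# tend to share hash codes, so candidate near-duplicates can be found by
# bucket lookup instead of exhaustive comparison. Sizes are assumptions.
rng = np.random.default_rng(5)
dim, n_bits = 128, 16
planes = rng.standard_normal((n_bits, dim))     # random hyperplanes

def hash_code(v):
    """16-bit sign hash of a feature vector."""
    bits = (planes @ v) > 0
    return int(sum(1 << i for i, b in enumerate(bits) if b))

base = rng.standard_normal(dim)
near_dup = base + 0.01 * rng.standard_normal(dim)   # tiny perturbation
unrelated = rng.standard_normal(dim)

print(hash_code(base) == hash_code(near_dup))       # usually True
print(hash_code(base) == hash_code(unrelated))      # usually False
```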

This workshop aims to bring together researchers and industrial
practitioners interested in large-scale multimedia retrieval and mining.
The workshop will provide a venue for the participants to explore a
variety of aspects and applications on how advanced multimedia analysis
techniques can be leveraged to address the challenges in large-scale
data collections. We solicit high-quality original papers for the
technical sessions that address important issues in large-scale
multimedia analysis, and which demonstrate that the proposed approaches
scale to sufficiently large multimedia collections (e.g., hundreds of
thousands of images, or hundreds of hours of video or audio content).
The list of possible topics includes, but is not limited to:

- Indexing and retrieval for large multimedia collections (including
images, video, audio and other multi-modal systems)
- Near-duplicate detection over large data sets
- Large-scale video event and temporal analysis over diverse sources
- Web-scale social-network and content-network analysis
- Automatic machine tagging, semantic annotation and object recognition on
massive multimedia collections
- Collaborative image and video annotation for distributed users
- Interfaces for exploring, browsing and visualizing large multimedia
collections
- Scalable and distributed machine learning and data mining methods for
multimedia data
- Scalable and distributed multimedia content analysis systems
- Construction of standardized large-scale multimedia collections

Submissions for this workshop are required to follow the same format as
regular ACM Multimedia papers with no more than 8 pages. All submitted
papers will go through a peer review process. We plan to invite extended
versions of selected papers for a special issue of a top-tier multimedia
journal.

Additional information is available at the ACM Multimedia website:
http://www.acmmm09.org/workshop/MRM2009/default.aspx

Important Dates:
Jun. 19, 2009 (Friday): Submission Deadline
Jul. 17, 2009 (Friday): Acceptance Notification
Jul. 24, 2009 (Friday): Camera-Ready papers

Workshop Chairs:
Rong Yan (IBM TJ Watson Research)
Qi Tian (Microsoft Research Asia)
John R. Smith (IBM TJ Watson Research)
Rahul Sukthankar (Intel Research and Carnegie Mellon)

------------------------------

From: Jianguo Zhang <j.z...@ecit.qub.ac.uk>
Subject: CFP : 1st Int WS Video Events - China - 6JUL2009

The 1st International Workshop on Video Event Categorization,
Tagging and Retrieval (VECTaR 2009), 24-September-2009

We are pleased to announce the call for papers for VECTaR 2009 (in
conjunction with ACCV 2009).

The 1st International Workshop on Video Event Categorization, Tagging
and Retrieval

Details are below.
Dates: 24 September 2009
Location: Xi'an, China
Website: http://www.cs.qub.ac.uk/~J.Zhang/VECTaR2009.htm

First Call for Papers

One of the remarkable capabilities of the human visual perception
system is its ability to interpret and recognize thousands of events in
videos, despite high levels of object clutter, different types of scene
context, variability in motion scale, appearance changes, occlusions and
object interactions. As an ultimate goal of computer vision systems, the
interpretation and recognition of visual events is one of the most
challenging problems in the field and has attracted growing interest for
decades. The task remains exceedingly difficult for several reasons:
1) large ambiguities remain in the definition of different levels of
events; 2) a computer model should capture the meaningful structure of a
specific event while its representation (or recognition process) remains
robust under challenging video conditions; and 3) a computer model
should be able to understand the context of video scenes in order to
interpret a video event meaningfully. Despite these difficulties, steady
progress has been made in recent years towards better models for video
event categorisation and recognition, e.g., from modelling events with
bags of spatio-temporal features to discovering event context, from
detecting events using a single camera to inferring events through a
distributed camera network, and from low-level event feature extraction
and description to high-level semantic event classification and
recognition.
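
To illustrate the bag-of-spatio-temporal-features representation mentioned
above, the sketch below quantises synthetic local descriptors against a
k-means vocabulary and builds a normalised histogram per clip; the
descriptor dimensionality and vocabulary size are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative bag-of-features encoding for video events: local
# spatio-temporal descriptors (synthetic here) are quantised against a
# learned vocabulary and each clip becomes a normalised word histogram.
rng = np.random.default_rng(6)
train_descriptors = rng.standard_normal((2000, 72))     # HOG/HOF-like descriptors
vocab = KMeans(n_clusters=50, n_init=10, random_state=0).fit(train_descriptors)

def encode(clip_descriptors):
    """Histogram of visual-word assignments for one clip, L1-normalised."""
    words = vocab.predict(clip_descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

clip = rng.standard_normal((300, 72))                   # descriptors from one clip
print(encode(clip).shape, encode(clip).sum())           # (50,) 1.0
```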

The goal of this workshop is to provide a forum for recent research
advances in the area of video event categorisation, tagging and
retrieval. The workshop seeks original high-quality submissions from
leading researchers and practitioners in academia as well as industry,
dealing with theories, applications and databases of visual event
recognition. Topics of interest include, but are not limited to:

Motion interpretation and grouping
Human Action representation and recognition
Abnormal event detection
Contextual event inference
Event recognition among a distributed camera network
Multimodal event recognition
Spatial temporal features for event categorisation
Hierarchical event recognition
Probabilistic graph models for event reasoning
Machine learning for event recognition
Global/local event descriptors
Metadata construction for event recognition
Bottom up and top down approaches for event recognition
Event-based video segmentation and summarization
Video event database gathering and annotation
Efficient indexing and concepts modelling for video event retrieval
Semantic-based video event retrieval
Online video event tagging
Evaluation methodologies for event-based systems
Event-based applications (security, sports, news, etc.)

Important Dates

Submission deadline: July 06, 2009
Notification of acceptance: July 31, 2009
Camera-ready papers: August 12, 2009
Workshop: September 24, 2009


WORKSHOP CO-CHAIRS
Dr. Jianguo Zhang, Queen's University Belfast, UK
Dr. Ling Shao, Philips Research Laboratories, The Netherlands
Dr. Lei Zhang, Microsoft Research Asia, China
Prof. Graeme A. Jones, Kingston University, UK

PAPER SUBMISSION
When submitting manuscripts to this workshop, the authors acknowledge
that the manuscripts, or papers substantially similar in content, have
NOT been submitted to another conference, workshop, or journal. The
paper format is the same as for the ACCV main conference. Please follow
the instructions on the website
http://www.accv2009.org/ . For the paper submission, please follow
the Submission Website
( https://cmt.research.microsoft.com/VECTAR2009/ ).

REVIEW
Each submission will be reviewed by at least three reviewers from
program committee members and external reviewers for originality,
significance, clarity, soundness, relevance and technical
contents. Accepted papers will be published together with the
proceedings of ACCV 2009 in electronic format by Springer. Authors of
high-quality papers will be invited to submit extended versions to an
edited book or a special issue of a top computer vision journal
(e.g., CVIU) after the conference.

PROGRAM COMMITTEE (alphabetical order)
Rama Chellappa, University of Maryland, USA
Roy Davies, Royal Holloway, University of London, UK
James W. Davis, Ohio State University, USA
Ling-Yu Duan, Peking University, China
Tim Ellis, Kingston University, UK
James Ferryman, University of Reading, UK
GianLuca Foresti, University of Udine, Italy
Shaogang Gong, Queen Mary University London, UK
Kaiqi Huang, Chinese Academy of Sciences, China
Winston Hsu, National Taiwan University
Yu-Gang Jiang, City University of Hong Kong, China
Graeme A. Jones, Kingston University, UK
Ivan Laptev, INRIA, France
Jianmin Li, Tsinghua University, China
Xuelong Li, Birkbeck College, University of London, UK
Zhu Li, Hong Kong Polytechnic University, China
Marcin Marszalek, University of Oxford, UK
Tao Mei, Microsoft Research Asia
Paul Miller, Queen's University Belfast, UK
Ram Nevatia, University of Southern California, USA
Yanwei Pang, Tianjin University, China
Federico Pernici, Università di Firenze, Italy
Carlo Regazzoni, University of Genoa, Italy
Shin'ichi Satoh, National Institute of Informatics, Japan
Dan Schonfeld, University of Illinois at Chicago, USA
Ling Shao, Philips Research Laboratories, The Netherlands
Yan Song, University of Science and Technology of China
Peter Sturm, INRIA, France
Dacheng Tao, Nanyang Technological University, Singapore
Xin-Jing Wang, Microsoft Research Asia
Tao Xiang, Queen Mary University London, UK
Dong Xu, Nanyang Technological University, Singapore
Li-Qun Xu, BT Exact, UK
Hongbin Zha, Peking University, Beijing, China
Jianguo Zhang, Queen's University Belfast, UK
Lei Zhang, Microsoft Research Asia


Contacts
Dr. Jianguo Zhang, mailto:Jiangu...@qub.ac.uk
Dr. Ling Shao, mailto:l.s...@philips.com
Dr. Lei Zhang, mailto:leiz...@microsoft.com

------------------------------

End of VISION-LIST digest 28.4
************************
