Projects


Bhiksha Raj

Sep 9, 2011, 8:33:58 AM
to mlsp-fa...@googlegroups.com
Hi All

Thank you for sitting through the lectures patiently yesterday. These
are interesting topics.

I am particularly interested in Mark Reilly's project -- getting a
decent solution would be useful to people worldwide (and also to him
-- he has suffered a personal tragedy, and the test, which he has been
interested in for a while, has suddenly become personally important to
him). I do hope someone decides to work on it.

As another note -- please look over the CFP below. I am looking for
one or two intensely motivated students to work on speech separation
from background sounds, with the intention of submitting to this
special issue. Note the deadline: it's Nov 30th, so this means having
to work very hard and fast. If you're interested in two months of
overwork, drop me a line.

I have most of the slides from the presenters; they will go up on the
course website today.

-Bhiksha


---------- Forwarded message ----------
From: jon <j.ba...@dcs.shef.ac.uk>
Date: Fri, Sep 9, 2011 at 7:01 AM
Subject: CFP: Special issue on Speech Separation and Recognition in
Multisource Environments
To:


+++++++++++++++++++++++++++++++++++++++++++
      COMPUTER SPEECH AND LANGUAGE
      http://www.elsevier.com/locate/csl

      Special issue on
      SPEECH SEPARATION AND RECOGNITION IN MULTISOURCE ENVIRONMENTS

      Submission Deadline:  NOVEMBER 30, 2011
+++++++++++++++++++++++++++++++++++++++++++

One of the chief difficulties of building distant-microphone speech
recognition systems for use in everyday applications is that the noise
background is typically 'multisource'. A speech recognition system
designed to operate in a family home, for example, must contend with
competing noise from televisions and radios, children playing, vacuum
cleaners, and outdoor noise from open windows. Despite their
complexity, such environments contain structure that can be learnt and
exploited using advanced source separation, machine learning and
speech recognition techniques such as those presented at the 1st
International Workshop on Machine Listening in Multisource
Environments (CHiME 2011).
http://spandh.dcs.shef.ac.uk/projects/chime/workshop/

This special issue solicits papers describing advances in speech
separation and recognition in multisource noise environments,
including theoretical developments, algorithms, and systems.

Examples of topics relevant to the special issue include:
• multiple speaker localization, beamforming and source separation,
• hearing inspired approaches to multisource processing,
• background noise tracking and modelling,
• noise-robust speech decoding,
• model combination approaches to robust speech recognition,
• datasets, toolboxes and other resources for multisource speech
separation and recognition.


SUBMISSION INSTRUCTIONS:
Manuscript submissions shall be made through the Elsevier Editorial
System (EES) at
http://ees.elsevier.com/csl/
Once logged in, click on “Submit New Manuscript” then select “Special
Issue: Speech Separation and Recognition in Multisource Environments”
in the “Choose Article Type” dropdown menu.


IMPORTANT DATES:
November 30, 2011: Paper submission
March 30, 2012: First review
May 30, 2012: Revised submission
July 30, 2012: Second review
August 30, 2012: Camera-ready submission


We look forward to your submissions!


Jon Barker, University of Sheffield, UK
Emmanuel Vincent, INRIA, France

---


--
Bhiksha Raj
Associate Professor
Carnegie Mellon University
Pittsburgh, PA, USA
Tel: 412 268 9826
