plover for languages other than English, and future of steno

Danishkadah

Jan 21, 2012, 2:56:31 AM
to plove...@googlegroups.com
Hi

I got busy with this and that, and couldn't get further than the first lesson with my SideWinder X4. Now I am back and hoping to learn some more. I read here that someone developed a game for learning Plover; I must try that.

Besides, my understanding is that users can make their own dictionaries and shorthand, right? In that case, all languages that share the same alphabet as English could benefit from Plover fairly easily. However, without knowing steno theory it may be difficult to develop a good dictionary. Are there any basic guidelines on how to develop a shorthand?
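[Editor's note: one basic rule any steno dictionary has to respect is that the keys within a single stroke must appear in the keyboard's fixed "steno order". A minimal sketch of a checker in Python, assuming the standard English layout Plover uses; the hyphen handling is deliberately simplified for illustration:]

```python
# Keys of the standard English steno layout, in the order they can
# appear within a single stroke (left bank, vowels, right bank).
STENO_ORDER = "STKPWHRAO*EUFRPBLGTSDZ"

def is_valid_stroke(stroke: str) -> bool:
    """Check that the keys in `stroke` appear in steno order.

    A real checker would also interpret the hyphen that disambiguates
    right-bank keys; here it is simply dropped.
    """
    pos = -1
    for key in stroke.replace("-", ""):
        pos = STENO_ORDER.find(key, pos + 1)
        if pos == -1:  # key unknown, or out of order
            return False
    return True

print(is_valid_stroke("PHRO*FR"))  # True: keys run left to right
print(is_valid_stroke("RK"))       # False: K cannot follow R
```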

In our region I think there are at least three languages that share the same alphabet as English, so they could probably benefit from Plover.
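[Editor's note: Plover's dictionaries are plain JSON files mapping strokes to translations, so drafting one for another Latin-alphabet language needs nothing more than a text editor or a short script. A sketch in Python; the entries and filename below are invented purely for illustration:]

```python
import json

# Hypothetical entries for a non-English dictionary: steno outlines
# (multi-stroke outlines joined by "/") on the left, translations
# on the right.
entries = {
    "KAR/WAPB": "karwan",
    "TKOS/TEU": "dosti",
}

# Write a JSON file shaped the way Plover expects its dictionaries.
with open("custom.json", "w", encoding="utf-8") as f:
    json.dump(entries, f, ensure_ascii=False, indent=2)
```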

We are still thinking about starting a training project for real-time captioning with Plover, but the length of training might be the biggest hurdle to getting people interested in learning and mastering it. Besides, at least for English, speech-to-text software is becoming more mature and needs very little training, so maybe steno will become obsolete in the next 5-10 years. If so, those who are pursuing a steno degree now will not find much market demand for their skill when they finish in 4-5 years. What do you think?

Thank you,
 
Muhammad Akram
Founder & Chairman
DANISHKADAH
www.danishkadah.org.pk

Member International Federation of Hard of Hearing Young People (IFHOHYP)
Associate with Asia-Pacific Development Center on Disability (APCD)

Group Leader  ALDA - Asia-Pacific
Chairperson International Committee of ALDA Inc. USA
Asst. Director Deaf Friends International (DFI)

leeo

Feb 2, 2012, 4:44:44 PM
to Plover
On Jan 20, 11:56 pm, Danishkadah <danishka...@gmail.com> wrote:
> Hi
>
> ... at least for
> English speech to text software are becoming more matured and need very
> little training so may be steno will become obsolete in next 5-10 years. So
> those who are pursing steno degree now when complete their degree after 4-5
> years will not have any big market demand for their skill. What you think?
>
> Thank you,
>
> Muhammad Akram

The pace of hardware development has always astonished me; the pace of
software development has always disappointed me. True Speech to Text
is always half a generation away -- and, astonishingly, it works
better with Japanese than English. This is because Japanese is based
on an extended alphabet of syllables while English is based on an
extended alphabet of blended sounds.

Even with a working hand-held speech-to-text machine, stenography
still has a use as the fastest text-based computer input in the case
where you don't want to interrupt your neighbor, such as in an office
setting. I see legions of "thumb jockeys" texting on the bus ride
home, and wonder how much more productive they would be using a steno
system. --Lee

Michael Roberts

Feb 2, 2012, 4:58:14 PM
to plove...@googlegroups.com
On 2/2/2012 4:44 PM, leeo wrote:
> Even with a working hand-held speech-to-text machine, stenography
> still has a use as the fastest text-based computer input in the case
> where you don't want to interrupt your neighbor, such as in an office
> setting.
Steno can be *faster* than voice, actually. Voice goes at about 180 wpm
- steno can go at 240 wpm. Ergonomically speaking, I can't imagine
using my voice for eight or ten hours a day, either.

Japanese is easier to recognize because it has fewer phonemes, by the
way. Only the five basic vowels and fewer consonants than English, and
*far* fewer blended phonemes. They have an awful lot of homophones,
though, so I'm sure you'd need some pretty powerful statistics to
disambiguate.
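[Editor's note: the statistical idea can be caricatured in a few lines of Python. The unigram counts below are invented, and real recognizers use full language models rather than single-word frequencies:]

```python
# Toy homophone disambiguation: given several candidate spellings for
# the same sound, pick the one most frequent in some corpus.
FREQ = {"there": 500, "their": 420, "they're": 90}  # invented counts

def pick(candidates, freq=FREQ):
    """Choose the most frequent spelling among homophone candidates."""
    return max(candidates, key=lambda word: freq.get(word, 0))

print(pick(["their", "there", "they're"]))  # prints "there"
```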


Mirabai Knight

Feb 2, 2012, 6:37:26 PM
to plove...@googlegroups.com
I think the main problem in your case, Muhammad, is that voice
recognition technology is very specifically tuned to a standard
American accent, and expanding its corpus to other accents is a really
non-trivial problem.

Keep in mind that 90% accuracy means one word in every ten is an error
-- that's one to two errors per sentence. 95% accuracy is one wrong
word in every 20. 99% accuracy is one error in every 100 words, or
about one per paragraph. As a CART provider, I aim for a standard of
99.9% accuracy, which is about one error per every 1,000 words, or
every four double spaced pages.
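[Editor's note: the arithmetic behind those figures is easy to check; a quick sketch in Python:]

```python
def words_per_error(accuracy: float) -> float:
    """At a given accuracy, roughly one word in every N is wrong."""
    return 1.0 / (1.0 - accuracy)

for acc in (0.90, 0.95, 0.99, 0.999):
    print(f"{acc:.1%} accurate -> one error per "
          f"{words_per_error(acc):.0f} words")
```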

Other notes on voice writing:

The supposedly short training period is voice writing's major selling
point over steno (aside from the cost of equipment), but from what I
can tell, it's not actually true. You can train someone to a moderate
degree of accuracy very quickly; all they have to do is speak into the
microphone slowly and clearly, and it'll get a fair amount of words
correct. For dictation or offline transcription, this can work well,
assuming they have the stamina to speak consistently for long periods
of time, because they can stop, go back, and correct errors as they
make them. But actual live realtime respeaking at CART levels of
accuracy (ideally over 99% correct) is much harder.

* Short words are more difficult for the speech engine to recognize
than multisyllabic words are, and are more likely to be ignored or
mistranscribed.

* If the voice captioner does mostly direct-echo respeaking, meaning
that they don't pronounce words in nonstandard ways, they have to
repeat multisyllabic words using the same number of syllables as in
the original audio; if they try to "brief" long words by assigning a
voice macro that lets them say the word in one syllable, they run up
against the software's difficulty in dealing with monosyllabic words
that I mentioned above.

* Because they're mostly saying words in the same amount of time as
they were originally spoken (unlike in steno, where a multisyllabic
word can be represented by a single split-second stroke), they don't
have much "reserve speed" to make corrections if the audio is
mistranscribed. They also have to verbally insert punctuation and use
macros to differentiate between homonyms, which also takes time and
can be fatiguing.

* Compensating for the lack of reserve speed by speaking the words
more quickly than they were originally spoken can also be problematic,
because the software is better able to transcribe words spoken with
clearly delineated spaces between them, as opposed to words that are
all run together.

* This means that if the software makes a mistake and the audio is
fairly rapid, the voice captioner is forced to choose between taking
time to delete the mistake and then catching up by paraphrasing the
speaker, or to keep up with the speaker while letting the mistake
stand.

* Also, the skill of echoing previously spoken words aloud while
listening to a steady stream of incoming words can be quite tricky,
especially when the audio quality is less than perfect; unlike
simultaneous writing and listening, simultaneous speaking and
listening can cause cross-channel interference.

So yeah.

Low or moderate accuracy offline voice writing = short training
period, most people can do it.

Low or moderate accuracy realtime voice writing = somewhat longer
training period, machine-compatible voice timbre and accent required.

CART-level accuracy realtime voice writing = extremely long training
period, an enormous amount of talent and dedication required.

This is why steno hasn't been supplanted yet, and isn't likely to be,
as long as CART clients refuse to accept inaccurate realtime.

More here: http://stenoknight.com/VoiceVersusCART.html
And here: http://plover.stenoknight.com/2010/06/cart-court-and-captioning.html

Krzysztof Smirnow

Feb 3, 2012, 5:05:11 AM
to plove...@googlegroups.com
Hellooo (read: hell-low)

2012/2/3 Mirabai Knight <askel...@gmail.com>

So yeah.

Low or moderate accuracy offline voice writing = short training
period, most people can do it.

Low or moderate accuracy realtime voice writing = somewhat longer
training period, machine-compatible voice timbre and accent required.

CART-level accuracy realtime voice writing = extremely long training
period, an enormous amount of talent and dedication required.

This is why steno hasn't been supplanted yet, and isn't likely to be,
as long as CART clients refuse to accept inaccurate realtime.

More here: http://stenoknight.com/VoiceVersusCART.html
And here: http://plover.stenoknight.com/2010/06/cart-court-and-captioning.html

That's what I thought. I have had several discussions with people asking why one should learn stenography (whether written or typed) if speech-to-text software will soon be here, and why work on building the project of a Polish System of Stenotypy (PLSS). Well, it has been "soon" for more than ten years now; that is my first point. Second, most of the research is done for English; perhaps one of the most innovative products in this field is the Polish IVONA, and they are working on English, while other languages are still far away.

So, from my point of view, it is still meaningful to build the PLSS, with the support of a UTF-8-capable Plover.
I hope my situation will improve soon, so that I will be able to contribute to Plover as an equal participant.

Greetings

Krzysztof Smirnow

--
::| Nondum lingua suum, dextra peregit opus |::
flame...@gmail.com


Danishkadah

Feb 7, 2012, 1:52:26 PM
to Plover
Thank you, Mirabai; your detailed reply helped a lot.

Akram