
waveOutOpen and waveOutWrite, please help!


Digby
Jun 15, 1998

1. Okay - you should still try changing your data from signed to unsigned or
vice versa. If they are 8-bit samples, add 0x80. If they are 16-bit samples,
add 0x8000.
2. It is not enough to say you have a file of "raw data". There is no universal
standard for what raw audio data is. You must know how many bits there are per
sample, the sample frequency, how many channels there are, whether the samples
are signed or unsigned, etc. All this information is usually put in a header.
Many audio file formats, including WAVE, AIFF, etc., are just "raw data" with a
header. Sometimes a wave file is more complicated than this: it can contain
other data besides raw audio samples. Avery is correct when he says you shouldn't
use standard C or C++ file reads to read a wave file. You should use the mmio
routines. The best way to learn them is to look them up in MSDN.
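The signed/unsigned fix in point 1 can be sketched in plain C. Adding 0x80 (or 0x8000 for 16-bit data) is the same as flipping the sign bit, which maps the signed range onto the unsigned one:

```c
#include <stddef.h>
#include <stdint.h>

/* Convert 8-bit signed samples (-128..127) to unsigned (0..255) in place.
   Adding 0x80 is equivalent to flipping the sign bit. */
void convert_s8_to_u8(uint8_t *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = (uint8_t)(buf[i] + 0x80);
}

/* Same idea for 16-bit samples: add 0x8000 (flip the top bit). */
void convert_s16_to_u16(uint16_t *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = (uint16_t)(buf[i] + 0x8000);
}
```

The same functions also convert the other direction, since adding 0x80 twice gets you back where you started.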
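To illustrate point 2 - why you need the header information - here is a hand-rolled parser for the *canonical* PCM WAVE layout (RIFF header, then a chunk list containing "fmt " and "data"). This is a portable sketch only; on Windows the mmio routines (mmioOpen/mmioDescend/mmioRead) are the right tool, since they also handle the non-canonical cases:

```c
#include <stdint.h>
#include <string.h>

/* Fields recovered from a canonical PCM WAVE file held in memory. */
struct wave_info {
    uint16_t channels;
    uint32_t sample_rate;
    uint16_t bits_per_sample;
    uint32_t data_bytes;   /* size of the raw sample data */
    const uint8_t *data;   /* pointer into the buffer */
};

static uint32_t rd32(const uint8_t *p)   /* little-endian 32-bit read */
{
    return p[0] | (uint32_t)p[1] << 8 | (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}
static uint16_t rd16(const uint8_t *p)   /* little-endian 16-bit read */
{
    return (uint16_t)(p[0] | p[1] << 8);
}

/* Returns 0 on success, -1 if the buffer is not a PCM WAVE file. */
int parse_wave(const uint8_t *buf, uint32_t len, struct wave_info *out)
{
    if (len < 12 || memcmp(buf, "RIFF", 4) || memcmp(buf + 8, "WAVE", 4))
        return -1;
    uint32_t pos = 12;
    int have_fmt = 0;
    while (pos + 8 <= len) {                 /* walk the chunk list */
        uint32_t cksize = rd32(buf + pos + 4);
        if (pos + 8 + cksize > len)
            return -1;
        if (!memcmp(buf + pos, "fmt ", 4) && cksize >= 16) {
            out->channels        = rd16(buf + pos + 10);
            out->sample_rate     = rd32(buf + pos + 12);
            out->bits_per_sample = rd16(buf + pos + 22);
            have_fmt = 1;
        } else if (!memcmp(buf + pos, "data", 4)) {
            out->data_bytes = cksize;
            out->data = buf + pos + 8;
            return have_fmt ? 0 : -1;
        }
        pos += 8 + cksize + (cksize & 1);    /* chunks are word-aligned */
    }
    return -1;
}
```

Note how everything the poster is missing from "raw data" - channels, rate, bits per sample, signedness (implied by bit depth in PCM WAVE) - lives in the "fmt " chunk.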

Digby


Thomas wrote:

> >MOD files have Amiga sound samples in them. Amiga sound samples are
> >signed, ranging from -128 to +127; Windows PCM 8-bit samples are unsigned,
> >0 to 255. Add 0x80 to all the sample bytes to solve your problem.
> >
> Ok, I used the term module player but more correctly it would be a song
> player. I'm not loading .MOD files. The files I'm loading have PCM sounds so
> it should be correctly loaded but the sound still sounds crappy.


Digby
Jun 15, 1998

BTW

From personal experience, DirectSound is very easy to use - I'd say about as easy
as the waveOut functions.
Digby


Thomas
Jun 17, 1998

Avery Lee wrote in message <3587654...@news.concentric.net>...
>Most sound cards have at most two channels, one 16-bit and one 8-bit; and


I thought of channels as the maximum number of digital sound sources the
card itself can handle at one time, like in mod trackers: some have support
for 16 channels, others support up to 32. Take the Gravis UltraSound, for
instance. You don't have to mix sounds in software; you just point out where
in memory the different sounds are stored, set some values, and the sound
chip will cycle through each channel, playing one part of a sound at a time.
I thought something like this was implemented in the Windows drivers? I mean,
not that simple, but at least making it possible to use hardware mixing. Or
is that where DirectSound comes in?

>You'll have to mix the sound data yourself. Among the easiest is to add
>all the samples together, reducing volume as necessary, and clipping the
>overly loud peaks of the output. Usually this is done in small chunks,
>because the more data you mix at once, the more data you need ahead of
>time, and thus the longer latency you get between initiation and output.
>And when you get into stereo, you can do neat mixing tricks like panning.
>There's a DLL floating around called WAVEMIX.DLL that may be of some help,
>but I don't remember where I found it or if it's any good.


So, if I want to use the waveform audio API, I have to use software mixing -
no way around it? What are the waveOutSetPitch() and waveOutSetPlaybackRate()
functions used for? Just setting the pitch and playback rate for the output
device at playback time?
Ok, so I have mixed the sounds - how do I change the pitch or volume of one
sound during playback? I guess I'll have to mix very small chunks of sound
data, but won't this take up unbelievably much system resources? Are there
any limits on the size of data I can send to a waveform-audio output device?
Or is it just limited by the total size of the virtual address space?

>>Are the mixer*
>>functions of any use?
>
>mixer* functions control the hardware source mixer, and only allow you to
>control volume, balance, etc. They work with sources such as Wave, CD,
>Line In, etc. and not necessarily multiple digital data sources. Even if
>they are any help, the mixer subsystem is a royal mess, and I don't
>recommend its documentation to anyone that I don't sorely hate. ;-)


Thank you for not recommending them to me then ;p Sorry for asking all these
questions, but I'm sorta anxious to get this thing working, and the docs in
the Win32 SDK aren't too good, so I have to bug others with my mess, asking
for help and sources :) Thanks!

Thomas
Jun 18, 1998

>To use those capabilities you most likely need to upload your sounds to the
>sound card, which limits how many sounds you can simultaneously use. Not
>only that, but not everyone has a card that can do this, and even
>those that do, don't necessarily put memory in it (how many people have
>AWE32/AWE64 boards with no memory in them?).


You're right. Software mixing, here I come.. bluah :)

>Yup. Boring, eh? If you want multiple sounds played at different sampling
>rates, you'll need to resample them to the output rate. A fixed-point DDA
>for each sample works nicely.


And DDA = ?
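For context: a DDA here is a digital differential analyzer - you step through the source samples with a fixed-point position increment instead of doing a floating-point divide per sample. A minimal sketch, assuming 16.16 fixed point and nearest-neighbour sampling (no interpolation):

```c
#include <stdint.h>
#include <stddef.h>

/* Resample a 16-bit mono stream from src_rate to dst_rate using a
   fixed-point DDA: step = src_rate/dst_rate in 16.16 fixed point, and
   the integer part of the accumulated position picks the source sample.
   Returns the number of samples written. Sketch quality only -- real
   mixers usually interpolate between neighbouring samples. */
size_t resample_s16(const int16_t *src, size_t src_len, uint32_t src_rate,
                    int16_t *dst, size_t dst_max, uint32_t dst_rate)
{
    uint32_t step = (uint32_t)(((uint64_t)src_rate << 16) / dst_rate);
    uint32_t pos = 0;                      /* 16.16 fixed-point position */
    size_t n = 0;
    while ((pos >> 16) < src_len && n < dst_max) {
        dst[n++] = src[pos >> 16];
        pos += step;
    }
    return n;
}
```

Changing the pitch of one voice is then just changing its step value - which is presumably why Avery suggests one DDA per sample.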

>No, you can mix good-sized chunks, like 1024 bytes, which won't take too
>much time. If you can stand a constant latency, you can watch the wave
>position when you want to initiate the sound, and offset it by that much


Consider this. I load a couple of sounds from a module, start reading the
pattern table and find out that sample 1 is played for about 1 second, then
its volume is changed while sample 2 is played unchanged. I would have to
remix the sounds each time one of the sounds changes attributes, right? So
that could mean VERY small "chunks" of sound data sent very rapidly to the
output device. Will this really work? :)

>As for limits, there are in hardware but I don't think there are with
>Windows. 16k is fine and I doubt Windows would complain about pushing 64k
>at a time. You can also allocate multiple buffers and queue them up; I
>think I've had as many as 32 buffers of 16k pending at once.


Windows and memory limits rule :) As for DOS, erhm... Anyway, I could
allocate memory for each sound big enough to hold the entire sound data, then
send mixed portions of this to the output device, and every time the
attributes of a sound change, I would have to remix and continue output,
right? So I should stay at least one or two mixed buffers ahead (double
or triple buffering?)?
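The remix-on-change scheme described above doesn't require remixing everything - only chunks not yet queued. A portable sketch (the part that would hand each chunk to waveOutWrite is left out): each voice's current volume is re-read at every chunk boundary, so a volume change takes effect one chunk later, plus however many chunks are already queued on the device:

```c
#include <stdint.h>
#include <stddef.h>

#define CHUNK 256   /* samples per mixed chunk; smaller = lower latency */

struct voice {
    const int16_t *data;
    size_t len, pos;      /* playback position in samples */
    int volume;           /* 0..256 (256 = full scale); changeable
                             between chunks */
};

/* Mix one chunk from all voices into out[]; finished voices contribute
   silence. Returns the number of samples produced (always CHUNK). */
size_t mix_chunk(struct voice *v, size_t nvoices, int16_t *out)
{
    for (size_t i = 0; i < CHUNK; i++) {
        int32_t sum = 0;
        for (size_t j = 0; j < nvoices; j++) {
            if (v[j].pos < v[j].len)
                sum += (int32_t)v[j].data[v[j].pos++] * v[j].volume / 256;
        }
        if (sum > 32767)  sum = 32767;     /* clip loud peaks */
        if (sum < -32768) sum = -32768;
        out[i] = (int16_t)sum;
    }
    return CHUNK;
}
```

With double or triple buffering you would call mix_chunk once per buffer, queue each buffer on the device, and refill a buffer as soon as the device returns it - so attribute changes from the pattern table land at most a buffer or two behind.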

Thomas
Jun 18, 1998

Thanks a lot for your help, Avery. You've been very helpful! I think it's
time for me to dig down into some Windows multimedia programming
documentation and start creating that mixer routine of mine. Expect me to be
back :)
