Digby
Thomas wrote:
> >MOD files have Amiga sound samples in them. Amiga sound samples are
> >signed, ranging from -128 to +127; Windows PCM 8-bit samples are unsigned,
> >0 to 255. Add 0x80 to all the sample bytes to solve your problem.
> >
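For what it's worth, the signed-to-unsigned fix Thomas describes is a one-liner over the sample buffer. A quick sketch (function and parameter names are mine):

```c
#include <assert.h>
#include <stddef.h>

/* Convert signed 8-bit samples (-128..127, Amiga/.MOD style) to
   unsigned 8-bit PCM (0..255, Windows style) by adding 0x80.
   On an unsigned char this wraps mod 256, same as flipping the top bit. */
void signed_to_unsigned8(unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] += 0x80;
}
```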
> Ok, I used the term module player, but more correctly it would be a song
> player. I'm not loading .MOD files. The files I'm loading have PCM sounds, so
> they should load correctly, but the sound still sounds crappy.
From personal experience, DirectSound is very easy to use; I'd say it's about
as easy as using the waveOut functions.
Digby
Avery Lee wrote in message <3587654...@news.concentric.net>...
>Most sound cards have at most two channels, one 16-bit and one 8-bit; and
I thought of channels as the maximum number of digital sound sources the card
itself can handle at one time, like in mod trackers: some support 16 channels,
others up to 32. Take the Gravis UltraSound, for instance. You don't have to
mix sounds in software; you just point out where in memory the different
sounds are stored, set some values, and the sound chip will cycle through each
channel, playing one part of a sound at a time. I thought something like this
was implemented in the Windows drivers? Not that simple, maybe, but at least
something that makes hardware mixing possible. Or is that where DirectSound
comes in?
>You'll have to mix the sound data yourself. Among the easiest is to add
>all the samples together, reducing volume as necessary, and clipping the
>overly loud peaks of the output. Usually this is done in small chunks,
>because the more data you mix at once, the more data you need ahead of
>time, and thus the longer latency you get between initiation and output.
>And when you get into stereo, you can do neat mixing tricks like panning.
>There's a DLL floating around called WAVEMIX.DLL that may be of some help,
>but I don't remember where I found it or if it's any good.
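If I understand the add-and-clip mixing Avery describes, it would look roughly like this for 16-bit voices: sum into a wider accumulator, then clip to the 16-bit range (my sketch, all names mine):

```c
#include <assert.h>
#include <stddef.h>

/* Mix several 16-bit signed voices into one output chunk:
   add all the samples together, then clip the overly loud peaks. */
void mix_chunk(short *out, const short *const *voices, int nvoices,
               size_t nsamples)
{
    for (size_t i = 0; i < nsamples; i++) {
        long acc = 0;
        for (int v = 0; v < nvoices; v++)
            acc += voices[v][i];          /* plain addition of samples  */
        if (acc > 32767)  acc = 32767;    /* clip positive peaks        */
        if (acc < -32768) acc = -32768;   /* clip negative peaks        */
        out[i] = (short)acc;
    }
}
```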
So, if I want to use the waveform audio API I have to use software mixing. No
way around it? What are the waveOutSetPitch() and waveOutSetPlaybackRate()
functions used for? Just setting the pitch and playback rate for the output
device at playback time?
Ok, so I have mixed the sounds; how do I change the pitch or volume of one
sound during playback? I guess I'll have to mix very small chunks of sound
data, but won't this take up unbelievably much system resources? Are there any
limits on the size of the data I can send to a waveform-audio output device?
Or is it just limited by the total size of the virtual address space?
>>Are the mixer*
>>functions of any use?
>
>mixer* functions control the hardware source mixer, and only allow you to
>control volume, balance, etc. They work with sources such as Wave, CD,
>Line In, etc. and not necessarily multiple digital data sources. Even if
>they are any help, the mixer subsystem is a royal mess, and I don't
>recommend its documentation to anyone that I don't sorely hate. ;-)
Thank you for not recommending them to me then ;p Sorry for asking all these
questions, but I'm sorta anxious to get this thing working, and the docs in
the Win32 SDK aren't too good, so I have to bug others with my mess, asking
for help and sources :) Thanks!
You're right. Software mixing, here I come.. bluah :)
>Yup. Boring, eh? If you want multiple sounds played at different sampling
>rates, you'll need to resample them to the output rate. A fixed-point DDA
>for each sample works nicely.
And DDA = ?
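My guess would be digital differential analyzer, i.e. a fixed-point position accumulator stepping through the source sample. If so, I imagine something like this, with a 16.16 fixed-point step (my sketch, all names mine):

```c
#include <assert.h>
#include <stddef.h>

/* Resample a 16-bit mono sound with a 16.16 fixed-point DDA:
   'step' = (src_rate << 16) / out_rate; the integer part of 'pos'
   indexes the source, the fractional part carries over between samples.
   Returns the number of output samples produced. */
size_t resample_dda(short *out, size_t out_len,
                    const short *src, size_t src_len,
                    unsigned long step /* 16.16 fixed point */)
{
    unsigned long pos = 0;  /* 16.16 position within src */
    size_t n = 0;
    while (n < out_len && (pos >> 16) < src_len) {
        out[n++] = src[pos >> 16];  /* nearest sample; could interpolate */
        pos += step;
    }
    return n;
}
```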
>No, you can mix good-sized chunks, like 1024 bytes, which won't take too
>much time. If you can stand a constant latency, you can watch the wave
>position when you want to initiate the sound, and offset it by that much
Consider this. I load a couple of sounds from a module, start reading the
pattern table, and find out that sample 1 is played for about 1 second, then
its volume is changed while sample 2 plays on unchanged. I would have to remix
the sounds each time one of them changes attributes, right? So that could mean
VERY small "chunks" of sound data sent very rapidly to the output device. Will
this really work? :)
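Or maybe the trick is that attribute changes only need to take effect at chunk boundaries: keep each voice's volume as state, apply it while mixing the current chunk, and just update the state between chunks, so past data never needs remixing. A sketch of what I mean (all names mine):

```c
#include <assert.h>
#include <stddef.h>

struct voice {
    const short *data;   /* the whole sample, loaded once      */
    size_t len, pos;     /* length and current play cursor     */
    int volume;          /* 0..256, may change between chunks  */
};

/* Mix the next chunk of all voices; a volume change made between
   calls simply takes effect on the next chunk. */
void mix_next_chunk(short *out, size_t chunk, struct voice *vs, int nv)
{
    for (size_t i = 0; i < chunk; i++) {
        long acc = 0;
        for (int v = 0; v < nv; v++) {
            if (vs[v].pos < vs[v].len)
                acc += (long)vs[v].data[vs[v].pos++] * vs[v].volume / 256;
        }
        if (acc > 32767)  acc = 32767;
        if (acc < -32768) acc = -32768;
        out[i] = (short)acc;
    }
}
```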
>As for limits, there are some in hardware, but I don't think there are with
>Windows. 16k is fine and I doubt Windows would complain about pushing 64k
>at a time. You can also allocate multiple buffers and queue them up; I
>think I've had as many as 32 buffers of 16k pending at once.
Windows and memory limits rule :) As for DOS, erhm... Anyway, I could allocate
memory for each sound big enough to hold the entire sound data, then send
mixed portions of this to the output device, and every time the attributes of
a sound change I'd have to remix and continue output, right? So I should stay
at least one or two mixed buffers ahead (double or triple buffering?)?
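The buffer-rotation part seems like it could be as simple as a ring of buffers with a "still queued" flag per buffer. A sketch of the bookkeeping (all names mine; in the real thing the submit step would be waveOutPrepareHeader + waveOutWrite, and the done step would be driven by the WOM_DONE callback):

```c
#include <assert.h>
#include <stddef.h>

#define NBUF   4        /* stay a few buffers ahead of playback */
#define CHUNK  1024     /* samples per buffer                   */

struct outbuf {
    short data[CHUNK];
    int   queued;       /* 1 while the device still owns it */
};

/* Pick the next buffer to mix into, round-robin; returns -1 if every
   buffer is still queued (we're far enough ahead, so wait). */
int next_free_buffer(struct outbuf bufs[NBUF], int *next)
{
    if (bufs[*next].queued)
        return -1;
    int i = *next;
    *next = (*next + 1) % NBUF;
    return i;
}

/* Hand a filled buffer to the device (waveOutWrite would go here). */
void submit_buffer(struct outbuf *b) { b->queued = 1; }

/* Called when the device finishes a buffer (from the WOM_DONE callback). */
void buffer_done(struct outbuf *b) { b->queued = 0; }
```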