
Sine Lookup Table with Linear Interpolation


rickman

Mar 16, 2013, 6:30:51 PM
I've been studying an approach to implementing a lookup table (LUT) to
implement a sine function. The two msbs of the phase define the
quadrant. I have decided that an 8 bit address for a single quadrant is
sufficient with an 18 bit output. Another 11 bits of phase will give me
sufficient resolution to interpolate the sin() to 18 bits.

If you assume a straight line between the two endpoints the midpoint of
each interpolated segment will have an error of
((Sin(high)-sin(low))/2)-sin(mid)

Without considering rounding, this reaches a maximum at the last segment
before the 90 degree point. I calculate about 4-5 ppm which is about
the same as the quantization of an 18 bit number.
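As a quick numeric check of that figure, a short Python sketch (256-entry quadrant table, as above) sweeps the midpoint error of every interpolated segment:

```python
import math

N = 256                # one-quadrant table size (8-bit address)
q = math.pi / 2

worst = 0.0
for n in range(N):
    lo, hi = q * n / N, q * (n + 1) / N
    mid = (lo + hi) / 2
    # midpoint error of the chord between the two stored endpoints
    err = (math.sin(lo) + math.sin(hi)) / 2 - math.sin(mid)
    worst = max(worst, abs(err))

print(worst)  # about 4.7e-6, i.e. 4-5 ppm, worst in the segment ending at 90 degrees
```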

There are two issues I have not come across regarding these LUTs. One
is adding a bias to the index before converting to the sin() value for
the table. Some would say the index 0 represents the phase 0 and the
index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
LUT inefficient, especially in hardware. If a bias of half the lsb is
added to the index before converting to a sin() value the value 0 to
2^n-1 becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized
table properly. I assume this is commonly done and I just can't find a
mention of it.
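The bias can be sketched directly. In the Python below (N = 256 as above), entry i holds the sine at index i + 0.5, and the mirror for the second quadrant then lands exactly on stored points, so no 2^n+1-th entry is needed:

```python
import math

N = 256  # one-quadrant table size
# Half-LSB bias: entry i holds the sine at index (i + 0.5), not i.
table = [math.sin(math.pi * (i + 0.5) / (2 * N)) for i in range(N)]

def half_cycle_sin(j):
    """sin for index j in 0 .. 2N-1 (0 to 180 degrees), by mirroring the table."""
    return table[j] if j < N else table[2 * N - 1 - j]

# With the bias, the mirrored half coincides with the stored points exactly.
for j in range(2 * N):
    assert abs(half_cycle_sin(j) - math.sin(math.pi * (j + 0.5) / (2 * N))) < 1e-12
```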

The other issue is how to calculate the values for the table to give the
best advantage to the linear interpolation. Rather than using the exact
match to the end points stored in the table, an adjustment could be done
to minimize the deviation over each interpolated segment. Without this,
the errors are always in the same direction. With an adjustment the
errors become bipolar and so will reduce the magnitude by half (approx).
Is this commonly done? It will require a bit of computation to get
these values, but even a rough approximation should improve the max
error by a factor of two to around 2-3 ppm.
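A rough Python sketch of that adjustment, assuming N+1 ideal sample points for simplicity and raising each stored point by half the local chord sag so the error goes bipolar:

```python
import math

N = 256
q = math.pi / 2
xs = [q * i / N for i in range(N + 1)]   # segment endpoints (N+1 kept for clarity)

def sag(i):
    """Midpoint sag of the chord over segment i (sin is concave, chord sits below)."""
    m = (xs[i] + xs[i + 1]) / 2
    return math.sin(m) - (math.sin(xs[i]) + math.sin(xs[i + 1])) / 2

exact = [math.sin(x) for x in xs]
# Raise each point by half the mean sag of the two segments it touches,
# so each chord's error swings roughly +/- sag/2 instead of 0 .. -sag.
adj = [exact[i] + (sag(max(i - 1, 0)) + sag(min(i, N - 1))) / 4
       for i in range(N + 1)]

def max_err(table, steps=16):
    worst = 0.0
    for i in range(N):
        for k in range(steps + 1):
            t = k / steps
            x = xs[i] * (1 - t) + xs[i + 1] * t
            y = table[i] * (1 - t) + table[i + 1] * t   # linear interpolation
            worst = max(worst, abs(y - math.sin(x)))
    return worst

print(max_err(exact), max_err(adj))  # the adjusted table roughly halves the max error
```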

Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
advantage of this resolution! lol

One thing I learned while doing this is that near 0 degrees the sin()
function is linear (we all knew that, right?) but near 90 degrees, the
sin() function is essentially quadratic. Who would have thunk it?

--

Rick

robert bristow-johnson

Mar 16, 2013, 7:13:26 PM

this *should* be a relatively simple issue, but i am confused

On 3/16/13 6:30 PM, rickman wrote:
> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function. The two msbs of the phase define the
> quadrant. I have decided that an 8 bit address for a single quadrant is
> sufficient with an 18 bit output.

10 bits or 1024 points. since you're doing linear interpolation, add
one more: copy the zeroth point x[0] to the last x[1024] so you don't
have to do any modulo (by ANDing with 1023) on the address of the
second point. (probably not necessary for hardware implementation.)


x[n] = sin( (pi/512)*n ) for 0 <= n <= 1024

> Another 11 bits of phase will give me
> sufficient resolution to interpolate the sin() to 18 bits.

so a 21 bit total index. your frequency resolution would be 2^(-21) in
cycles per sampling period or 2^(-21) * Fs. those 2 million values
would be the only frequencies you can meaningfully generate.

>
> If you assume a straight line between the two endpoints the midpoint of
> each interpolated segment will have an error of
> ((Sin(high)-sin(low))/2)-sin(mid)
>

do you mean "+" instead of the first "-"? to be explicit:

( sin((pi/512)*(n+1)) + sin((pi/512)*n) )/2 - sin((pi/512)*(n+0.5))

that's the error in the middle. dunno if it's the max error, but it
might be.

> Without considering rounding, this reaches a maximum at the last segment
> before the 90 degree point.

at both the 90 and 270 degree points. (or just before and after those
points.)

> I calculate about 4-5 ppm which is about the
> same as the quantization of an 18 bit number.
>
> There are two issues I have not come across regarding these LUTs. One is
> adding a bias to the index before converting to the sin() value for the
> table. Some would say the index 0 represents the phase 0 and the index
> 2^n represents 90 degrees. But this is 2^n+1 points which makes a LUT
> inefficient, especially in hardware. If a bias of half the lsb is added
> to the index before converting to a sin() value the value 0 to 2^n-1
> becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized table
> properly. I assume this is commonly done and I just can't find a mention
> of it.

do you mean biasing by 1/2 of a point? then your max error will be *at*
the 90 and 270 degree points and it will be slightly more than what you
had before.

>
> The other issue is how to calculate the values for the table to give the
> best advantage to the linear interpolation. Rather than using the exact
> match to the end points stored in the table, an adjustment could be done
> to minimize the deviation over each interpolated segment. Without this,
> the errors are always in the same direction. With an adjustment the
> errors become bipolar and so will reduce the magnitude by half (approx).
> Is this commonly done? It will require a bit of computation to get these
> values, but even a rough approximation should improve the max error by a
> factor of two to around 2-3 ppm.

if you assume an approximate quadratic behavior over that short segment,
you can compute the straight line where the error in the middle is equal
in magnitude (and opposite in sign) to the error at the end points.
that's a closed form solution, i think.

dunno if that is what you actually want for a sinusoidal waveform
generator. i might think you want to minimize the mean square error.

> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
> advantage of this resolution! lol
>
> One thing I learned while doing this is that near 0 degrees the sin()
> function is linear (we all knew that, right?) but near 90 degrees, the
> sin() function is essentially quadratic. Who would have thunk it?

Newton? Leibniz? Gauss?

sin(t + pi/2) = cos(t)


--

r b-j r...@audioimagination.com

"Imagination is more important than knowledge."


Robert Baer

Mar 16, 2013, 8:55:34 PM
Sounds like you are making excellent improvements on the standard
ho-hum algorithms; the net result will be superior to anything done out
there (commercially).
With the proper offsets, one needs only 22.5 degrees of lookup
("bounce" off each multiple of 45 degrees).

lang...@fonz.dk

Mar 16, 2013, 8:02:10 PM

glen herrmannsfeldt

Mar 16, 2013, 8:40:25 PM
In comp.dsp rickman <gnu...@gmail.com> wrote:

> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function. The two msbs of the phase define the
> quadrant. I have decided that an 8 bit address for a single quadrant is
> sufficient with an 18 bit output. Another 11 bits of phase will give me
> sufficient resolution to interpolate the sin() to 18 bits.

In the early days of MOS ROMs, there were commercial ROMs for exactly
this, I believe from National. (And when ROMs were much smaller than today.)

The data sheet (I can't find it right now, but it is around
somewhere) shows the combination of ROMs and TTL adders to do the
interpolation.

> If you assume a straight line between the two endpoints the midpoint of
> each interpolated segment will have an error of
> ((Sin(high)-sin(low))/2)-sin(mid)

> Without considering rounding, this reaches a maximum at the last segment
> before the 90 degree point. I calculate about 4-5 ppm which is about
> the same as the quantization of an 18 bit number.

I presume the ROM designers had all this figured out.

> There are two issues I have not come across regarding these LUTs. One
> is adding a bias to the index before converting to the sin() value for
> the table. Some would say the index 0 represents the phase 0 and the
> index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
> LUT inefficient, especially in hardware. If a bias of half the lsb is
> added to the index before converting to a sin() value the value 0 to
> 2^n-1 becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized
> table properly. I assume this is commonly done and I just can't find a
> mention of it.

I don't remember anymore. But since you have the additional bit to
do the interpolation, it should be easy.

> The other issue is how to calculate the values for the table to give the
> best advantage to the linear interpolation. Rather than using the exact
> match to the end points stored in the table, an adjustment could be done
> to minimize the deviation over each interpolated segment. Without this,
> the errors are always in the same direction. With an adjustment the
> errors become bipolar and so will reduce the magnitude by half (approx).
> Is this commonly done? It will require a bit of computation to get
> these values, but even a rough approximation should improve the max
> error by a factor of two to around 2-3 ppm.

I am not sure how you are thinking about doing it. I believe that
some of the bits that go into the MSB ROM also go into the lower ROM
to select the interpolation slope, and then additional bits to select
the actual value. Say you have a 1024x10 ROM for the first one,
then want to interpolate that. How many bits of linear interpolation
can be done? (That is, before linear isn't close enough any more.)
Then the appropriate number of low bits and high bits, but not the
in-between bits, go into the interpolation ROM, whose output is then
added to the main ROM's output.

> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
> advantage of this resolution! lol

> One thing I learned while doing this is that near 0 degrees the sin()
> function is linear (we all knew that, right?) but near 90 degrees, the
> sin() function is essentially quadratic. Who would have thunk it?

Well, cos() is known to be quadratic near zero.

I once knew someone with a homework problem something like:

Take a string all the way around the earth at the equator, and add
(if I remember right) 3m. (Assuming the earth is a perfect sphere.)
If the string is at a uniform height around the earth, how far from
the surface is it?

Now, pull it up at one point. How far is that point above the surface?
More specifically, as I originally heard it, is it higher than the
height of a specific nine story library?

As I remember it, the usual small angle approximations to trig.
functions aren't enough to do this. The next term is needed.
The 3m added might not be right, as the answer is close enough
to the certain building height to need the additional term.

-- glen

Vladimir Vassilevsky

Mar 16, 2013, 9:19:53 PM
On 3/16/2013 5:30 PM, rickman wrote:
> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function. The two msbs of the phase define the
> quadrant. I have decided that an 8 bit address for a single quadrant is
> sufficient with an 18 bit output. Another 11 bits of phase will give me
> sufficient resolution to interpolate the sin() to 18 bits.

An optimal one-quadrant LUT with 256 entries and linear interpolation
makes for a sine approximation with about 16 bits of accuracy.
If there is a need for better precision, consider different approaches.

Vladimir Vassilevsky
DSP and Mixed Signal Designs
www.abvolt.com



glen herrmannsfeldt

Mar 17, 2013, 3:20:21 AM
In comp.dsp rickman <gnu...@gmail.com> wrote:
> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function. The two msbs of the phase define the
> quadrant. I have decided that an 8 bit address for a single quadrant is
> sufficient with an 18 bit output. Another 11 bits of phase will give me
> sufficient resolution to interpolate the sin() to 18 bits.

> If you assume a straight line between the two endpoints the midpoint of
> each interpolated segment will have an error of
> ((Sin(high)-sin(low))/2)-sin(mid)

See:

http://ia601506.us.archive.org/8/items/bitsavers_nationaldaMOSIntegratedCircuits_20716690/1972_National_MOS_Integrated_Circuits.pdf

The description starts on page 273.

-- glen

Nobody

Mar 17, 2013, 12:35:15 PM
On Sat, 16 Mar 2013 18:30:51 -0400, rickman wrote:

> There are two issues I have not come across regarding these LUTs. One
> is adding a bias to the index before converting to the sin() value for
> the table. Some would say the index 0 represents the phase 0 and the
> index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
> LUT inefficient, especially in hardware.

An n-bit table has 2^n+1 entries for 2^n ranges. Range i has endpoints of
table[i] and table[i+1]. The final range has i=(1<<n)-1, so the last
entry in the table is table[1<<n], not table[(1<<n)-1].
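In code, the count works out like this:

```python
n = 8
N = 1 << n            # 2^n interpolation ranges
# range i spans table[i] .. table[i+1]; the last range (i = N-1) reads table[N]
entries_needed = N + 1
assert entries_needed == (1 << n) + 1  # one more than a binary-sized table holds
```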

> One thing I learned while doing this is that near 0 degrees the sin()
> function is linear (we all knew that, right?) but near 90 degrees, the
> sin() function is essentially quadratic. Who would have thunk it?

sin((pi/2)+x) = sin((pi/2)-x) = cos(x), and the Maclaurin series for
cos(x) is:

cos(x) = 1 - (x^2)/2! + (x^4)/4! - ...
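Numerically, the leading quadratic term is easy to check:

```python
import math

# sin near 90 degrees is cos near 0: 1 - x^2/2 to leading order,
# with the residual falling off as the x^4 term
for x in (0.1, 0.01, 0.001):
    assert abs(math.sin(math.pi / 2 + x) - (1 - x * x / 2)) < x ** 4
print("quadratic to within the x^4 term")
```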


rickman

Mar 17, 2013, 5:49:23 PM
On 3/16/2013 7:13 PM, robert bristow-johnson wrote:
>
> this *should* be a relatively simple issue, but i am confused
>
> On 3/16/13 6:30 PM, rickman wrote:
>> I've been studying an approach to implementing a lookup table (LUT) to
>> implement a sine function. The two msbs of the phase define the
>> quadrant. I have decided that an 8 bit address for a single quadrant is
>> sufficient with an 18 bit output.
>
> 10 bits or 1024 points. since you're doing linear interpolation, add one
> more: copy the zeroth point x[0] to the last x[1024] so you don't have
> to do any modulo (by ANDing with 1023) on the address of the second
> point. (probably not necessary for hardware implementation.)
>
>
> x[n] = sin( (pi/512)*n ) for 0 <= n <= 1024

So you are suggesting a table with 2^n+1 entries? Not such a great idea
in some apps, like hardware. What is the advantage? Also, why 10 bit
address for a 1024 element table? My calculations indicate a linear
interpolation can be done with 4 ppm accuracy with a 256 element LUT.
I'm not completely finished with my simulation, but I'm pretty confident
this much is correct.


>> Another 11 bits of phase will give me
>> sufficient resolution to interpolate the sin() to 18 bits.
>
> so a 21 bit total index. your frequency resolution would be 2^(-21) in
> cycles per sampling period or 2^(-21) * Fs. those 2 million values would
> be the only frequencies you can meaningfully generate.

No, that is the phase sent to the LUT. The total phase accumulator can
be larger as the need requires.


>> If you assume a straight line between the two endpoints the midpoint of
>> each interpolated segment will have an error of
>> ((Sin(high)-sin(low))/2)-sin(mid)
>>
>
> do you mean "+" instead of the first "-"? to be explicit:
>
> ( sin((pi/512)*(n+1)) + sin((pi/512)*n) )/2 - sin((pi/512)*(n+0.5))
>
> that's the error in the middle. dunno if it's the max error, but it
> might be.

Yes, thanks for the correction. The max error? I'm not so worried
about that exactly. The error is a curve with the max magnitude near
the middle if nothing further is done to minimize it.


>> Without considering rounding, this reaches a maximum at the last segment
>> before the 90 degree point.
>
> at both the 90 and 270 degree points. (or just before and after those
> points.)

I'm talking about the LUT. The LUT only considers the first quadrant.


>> I calculate about 4-5 ppm which is about the
>> same as the quantization of an 18 bit number.
>>
>> There are two issues I have not come across regarding these LUTs. One is
>> adding a bias to the index before converting to the sin() value for the
>> table. Some would say the index 0 represents the phase 0 and the index
>> 2^n represents 90 degrees. But this is 2^n+1 points which makes a LUT
>> inefficient, especially in hardware. If a bias of half the lsb is added
>> to the index before converting to a sin() value the value 0 to 2^n-1
>> becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized table
>> properly. I assume this is commonly done and I just can't find a mention
>> of it.
>
> do you mean biasing by 1/2 of a point? then your max error will be *at*
> the 90 and 270 degree points and it will be slightly more than what you
> had before.

No, not quite right. There is a LUT with points spaced at 90/255
degrees apart starting at just above 0 degrees. The values between
points in the table are interpolated with a maximum deviation near the
center of the interpolation. Next to 90 degrees the interpolation is
using the maximum interpolation factor which will result in a value as
close as you can get to the correct value if the end points are used to
construct the interpolation line. 90 degrees itself won't actually be
represented, but rather points on either side, 90±delta where delta is
360° / 2^(n+1) with n being the number of bits in the input to the sin
function.


>> The other issue is how to calculate the values for the table to give the
>> best advantage to the linear interpolation. Rather than using the exact
>> match to the end points stored in the table, an adjustment could be done
>> to minimize the deviation over each interpolated segment. Without this,
>> the errors are always in the same direction. With an adjustment the
>> errors become bipolar and so will reduce the magnitude by half (approx).
>> Is this commonly done? It will require a bit of computation to get these
>> values, but even a rough approximation should improve the max error by a
>> factor of two to around 2-3 ppm.
>
> if you assume an approximate quadratic behavior over that short segment,
> you can compute the straight line where the error in the middle is equal
> in magnitude (and opposite in sign) to the error at the end points.
> that's a closed form solution, i think.

Yes, it is a little tricky because at this point we are working with
integer math (or technically fixed point I suppose). Rounding errors are
what this is all about. I've done some spreadsheet simulations and I
have some pretty good results. I updated it a bit to generalize it to
the LUT size and I keep getting the same max error counts (adjusted to
work with integers rather than fractions) of ±3 no matter what the size of
the interpolation factor. I don't expect this and I think I have
something wrong in the calculations. I'll need to resolve this.


> dunno if that is what you actually want for a sinusoidal waveform
> generator. i might think you want to minimize the mean square error.

We are talking about the lsbs of a 20+ bit word. Do you think there
will be much of a difference in result? I need to actually be able to
do the calculations and get this done rather than continue to work on
the process. Also, each end point affects two lines, so there are
tradeoffs, make one better and the other worse? It seems to get
complicated very quickly.


>> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
>> advantage of this resolution! lol
>>
>> One thing I learned while doing this is that near 0 degrees the sin()
>> function is linear (we all knew that, right?) but near 90 degrees, the
>> sin() function is essentially quadratic. Who would have thunk it?
>
> Newton? Leibnitz? Gauss?
>
> sin(t + pi/2) = cos(t)

How does that imply a quadratic curve at 90 degrees? At least I think
like the greats!

--

Rick

rickman

Mar 17, 2013, 5:56:21 PM
I'm not sure this would be easier. The LUT and interpolation require a
reasonable amount of logic. This requires raising X to the 5th power
and five constant multiplies. My FPGA doesn't have multipliers and this
may be too much for the poor little chip. I suppose I could do some of
this with a LUT and linear interpolation... lol

--

Rick

glen herrmannsfeldt

Mar 17, 2013, 6:02:31 PM
In comp.dsp rickman <gnu...@gmail.com> wrote:
> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function.

(snip)

> If you assume a straight line between the two endpoints the midpoint of
> each interpolated segment will have an error of
> ((Sin(high)-sin(low))/2)-sin(mid)

Seems what they instead do is implement

sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less
significant bits of theta. Also, cos(L) tends to be almost 1,
so they just say it is 1.
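That two-table angle-sum scheme can be sketched in Python (the 8/10 bit split below is an assumption matching the thread's numbers, and cos(L) is taken as 1 as described):

```python
import math

MBITS, LBITS = 8, 10
M_N, L_N = 1 << MBITS, 1 << LBITS
q = math.pi / 2                     # one quadrant
STEPS = M_N * L_N                   # total phase steps in the quadrant

sin_M = [math.sin(q * m / M_N) for m in range(M_N)]     # coarse table
cos_M = [math.cos(q * m / M_N) for m in range(M_N)]
sin_L = [math.sin(q * l / STEPS) for l in range(L_N)]   # fine table, small values

def sine(phase):
    """phase in 0 .. STEPS-1; sin(M+L) = sin(M)cos(L) + cos(M)sin(L), cos(L) ~ 1."""
    m, l = phase >> LBITS, phase & (L_N - 1)
    return sin_M[m] + cos_M[m] * sin_L[l]

worst = max(abs(sine(p) - math.sin(q * p / STEPS)) for p in range(0, STEPS, 97))
print(worst)
```

The residual error comes almost entirely from the dropped sin(M)·(1 - cos(L)) term, a couple of parts in 10^5 at this split.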

(snip)

> There are two issues I have not come across regarding these LUTs. One
> is adding a bias to the index before converting to the sin() value for
> the table. Some would say the index 0 represents the phase 0 and the
> index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
> LUT inefficient, especially in hardware. If a bias of half the lsb is
> added to the index before converting to a sin() value the value 0 to
> 2^n-1 becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized
> table properly. I assume this is commonly done and I just can't find a
> mention of it.

Well, you can fix it in various ways, but you definitely want 2^n.

> The other issue is how to calculate the values for the table to give the
> best advantage to the linear interpolation. Rather than using the exact
> match to the end points stored in the table, an adjustment could be done
> to minimize the deviation over each interpolated segment. Without this,
> the errors are always in the same direction. With an adjustment the
> errors become bipolar and so will reduce the magnitude by half (approx).
> Is this commonly done? It will require a bit of computation to get
> these values, but even a rough approximation should improve the max
> error by a factor of two to around 2-3 ppm.

Seems like that is one of the suggestions, but not done in the ROMs
they were selling. Then the interpolation has to add or subtract,
which is slightly (in TTL) harder.

The interpolated sine was done in 1970 with 128x8 ROMs. With larger
ROMs, like usual today, you shouldn't need it unless you want really
high resolution.

> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
> advantage of this resolution! lol

> One thing I learned while doing this is that near 0 degrees the sin()
> function is linear (we all knew that, right?) but near 90 degrees, the
> sin() function is essentially quadratic. Who would have thunk it?

-- glen

rickman

Mar 17, 2013, 6:02:27 PM
I can't say I follow this. How do I get a 22.5 degree sine table to
expand to 90 degrees?

As to my improvements, I can see where most implementations don't bother
pushing the limits of the hardware, but I can't imagine no one has done
this before. I'm sure there are all sorts of apps, especially in older
hardware where resources were constrained, where they pushed for every
drop of precision they could get. What about space apps? My
understanding is they *always* push their designs to be the best they
can be.

--

Rick

rickman

Mar 17, 2013, 6:07:18 PM
Yeah, a friend of mine knows these polynomials inside and out. I never
learned that stuff so well. At the time it didn't seem to have a lot of
use and later there were always better ways of getting the answer. From
looking at the above I see the X^2 term will dominate over the higher
order terms near 0, but of course the lowest order term, 1, will be the
truly dominant term... lol In other words, the function cos(x) near
0 is just the constant 1 to a first order approximation.

--

Rick

rickman

Mar 17, 2013, 6:17:11 PM
On 3/17/2013 6:02 PM, glen herrmannsfeldt wrote:
> In comp.dsp rickman<gnu...@gmail.com> wrote:
>> I've been studying an approach to implementing a lookup table (LUT) to
>> implement a sine function.
>
> (snip)
>
>> If you assume a straight line between the two endpoints the midpoint of
>> each interpolated segment will have an error of
>> ((Sin(high)-sin(low))/2)-sin(mid)
>
> Seems what they instead do is implement
>
> sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less
> significant bits of theta. Also, cos(L) tends to be almost 1,
> so they just say it is 1.

Interesting. This tickles a few grey cells. Two values based solely on
the MS portion of x and a value based solely on the LS portion. Two
tables, three lookups, a multiply and an add. That could work. The
value for sin(M) would need to be full precision, but I assume the
values of sin(L) could have less range because sin(L) will always be a
small value...


>> There are two issues I have not come across regarding these LUTs. One
>> is adding a bias to the index before converting to the sin() value for
>> the table. Some would say the index 0 represents the phase 0 and the
>> index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
>> LUT inefficient, especially in hardware. If a bias of half the lsb is
>> added to the index before converting to a sin() value the value 0 to
>> 2^n-1 becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized
>> table properly. I assume this is commonly done and I just can't find a
>> mention of it.
>
> Well, you can fix it in various ways, but you definitely want 2^n.

I think one of the other posters was saying to add an entry for 90
degrees. I don't like that. It could be done but it complicates the
process of using a table for 0-90 degrees.


>> The other issue is how to calculate the values for the table to give the
>> best advantage to the linear interpolation. Rather than using the exact
>> match to the end points stored in the table, an adjustment could be done
>> to minimize the deviation over each interpolated segment. Without this,
>> the errors are always in the same direction. With an adjustment the
>> errors become bipolar and so will reduce the magnitude by half (approx).
>> Is this commonly done? It will require a bit of computation to get
>> these values, but even a rough approximation should improve the max
>> error by a factor of two to around 2-3 ppm.
>
> Seems like that is one of the suggestions, but not done in the ROMs
> they were selling. Then the interpolation has to add or subtract,
> which is slightly (in TTL) harder.
>
> The interpolated sine was done in 1970 with 128x8 ROMs. With larger
> ROMs, like usual today, you shouldn't need it unless you want really
> high resolution.

I'm not using an actual ROM chip. This is block ram in a rather small
FPGA with only six blocks. I need two channels and may want to use some
of the blocks for other functions.


>> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
>> advantage of this resolution! lol
>
>> One thing I learned while doing this is that near 0 degrees the sin()
>> function is linear (we all knew that, right?) but near 90 degrees, the
>> sin() function is essentially quadratic. Who would have thunk it?
>
> -- glen

Thanks for the advice to everyone.

--

Rick

John Larkin

Mar 17, 2013, 6:20:44 PM
We have a few products that do DDS with 4K point, 16-bit waveform tables (full
360 degrees if we do a sine wave) and optional linear interpolation at every DAC
clock, usually 128 MHz. At lower frequencies, when straight DDS would normally
output the same table point many times, interpolation gives us a linear ramp of
points, one every 8 ns. It helps frequency domain specs a little. We didn't try
fudging the table to reduce the max midpoint error... too much thinking.

It would be interesting to distort the table entries to minimize THD. The
problem there is how to measure very low THD to close the loop.


--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators

lang...@fonz.dk

Mar 17, 2013, 6:23:26 PM
On Mar 17, 10:56 pm, rickman <gnu...@gmail.com> wrote:
> >http://www.emt.uni-linz.ac.at/education/Inhalte/pr_mderf/downloads/AD...
>
> I'm not sure this would be easier.  The LUT and interpolation require a
> reasonable amount of logic.  This requires raising X to the 5th power
> and five constant multiplies.  My FPGA doesn't have multipliers and this
> may be too much for the poor little chip.  I suppose I could do some of
> this with a LUT and linear interpolation... lol
>

I see, I assumed you had an MCU (or fpga) with multipliers

considered CORDIC?


-Lasse
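For reference, CORDIC needs only shifts, adds, and a small arctan table. A minimal rotation-mode sketch (Python floats for clarity; an FPGA version would use fixed point):

```python
import math

ITERS = 20
ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]
GAIN = 1.0
for a in ANGLES:
    GAIN *= math.cos(a)       # pre-scale so the final vector length comes out 1

def cordic_sin_cos(theta):
    """theta in [-pi/2, pi/2]; rotate (GAIN, 0) toward theta by the table angles."""
    x, y, z = GAIN, 0.0, theta
    for i in range(ITERS):
        d = 1.0 if z >= 0.0 else -1.0
        # each step is two shifts and two adds in hardware
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x               # (sin theta, cos theta), good to roughly 2^-ITERS

s, c = cordic_sin_cos(math.pi / 6)
print(s, c)  # close to 0.5 and 0.866
```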

rickman

Mar 17, 2013, 6:35:37 PM
I should learn that. I know it has been used to good effect in FPGAs.
Right now I am happy with the LUT, but I would like to learn more about
CORDIC. Any useful references?

--

Rick

rickman

Mar 17, 2013, 6:44:07 PM
On 3/17/2013 6:20 PM, John Larkin wrote:
>
> It would be interesting to distort the table entries to minimize THD. The
> problem there is how to measure very low THD to close the loop.

It could also be a difficult problem to solve without some sort of
exhaustive search. Each point you fudge affects two curve segments and
each curve segment is affected by two points. So there is some degree
of interaction.

Anyone know if there is a method to solve such a problem easily? I am
sure that my current approach gets each segment end point to within ±1
lsb and I suppose once you measure THD in some way it would not be an
excessive amount of work to tune the 256 end points in a few passes
through. This sounds like a tree search with pruning.

I'm assuming there would be no practical closed form solution. But
couldn't the THD be calculated in a simulation rather than having to be
measured on the bench?

--

Rick

Jon Kirwan

Mar 17, 2013, 6:54:06 PM
On Sun, 17 Mar 2013 17:49:23 -0400, rickman
<gnu...@gmail.com> wrote:

>On 3/16/2013 7:13 PM, robert bristow-johnson wrote:
>>
>> On 3/16/13 6:30 PM, rickman wrote:
>>><snip>
>>> One thing I learned while doing this is that near 0 degrees the sin()
>>> function is linear (we all knew that, right?) but near 90 degrees, the
>>> sin() function is essentially quadratic. Who would have thunk it?
>>
>> Newton? Leibnitz? Gauss?
>>
>> sin(t + pi/2) = cos(t)
>
>How does that imply a quadratic curve at 90 degrees? At least I think
>like the greats!

You already know why. Just not wearing your hat right now.
The leading Taylor terms at 0: cos(x) is 1 - x^2/2, and sin(x)
is x. One is quadratic, one is linear.

Jon

John Larkin

Mar 17, 2013, 6:57:34 PM
On Sun, 17 Mar 2013 18:44:07 -0400, rickman <gnu...@gmail.com> wrote:

>On 3/17/2013 6:20 PM, John Larkin wrote:
>>
>> It would be interesting to distort the table entries to minimize THD. The
>> problem there is how to measure very low THD to close the loop.
>
>It could also be a difficult problem to solve without some sort of
>exhaustive search. Each point you fudge affects two curve segments and
>each curve segment is affected by two points. So there is some degree
>of interaction.
>
>Anyone know if there is a method to solve such a problem easily? I am
>sure that my current approach gets each segment end point to within ±1
>lsb and I suppose once you measure THD in some way it would not be an
>excessive amount of work to tune the 256 end points in a few passes
>through. This sounds like a tree search with pruning.

If I could measure the THD accurately, I'd just blast in a corrective harmonic
adder to the table for each of the harmonics. That could be scientific or just
iterative. For the 2nd harmonic, for example, add a 2nd harmonic sine component
into all the table points, many of which wouldn't even change by an LSB, to null
out the observed distortion. Gain and phase, of course.

>
>I'm assuming there would be no practical closed form solution. But
>couldn't the THD be calculated in a simulation rather than having to be
>measured on the bench?

Our experience is that the output amplifiers, after the DAC and lowpass filter,
are the nasty guys for distortion, at least in the 10s of MHz. Lots of
commercial RF generators have amazing 2nd harmonic specs, like -20 dBc. A table
correction would unfortunately be amplitude dependent, so it gets messy fast.

rickman

unread,
Mar 17, 2013, 7:03:21 PM3/17/13
to
On 3/17/2013 6:57 PM, John Larkin wrote:
> On Sun, 17 Mar 2013 18:44:07 -0400, rickman<gnu...@gmail.com> wrote:
>
>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>
>>> It would be interesting to distort the table entries to minimize THD. The
>>> problem there is how to measure very low THD to close the loop.
>>
>> It could also be a difficult problem to solve without some sort of
>> exhaustive search. Each point you fudge affects two curve segments and
>> each curve segment is affected by two points. So there is some degree
>> of interaction.
>>
>> Anyone know if there is a method to solve such a problem easily? I am
>> sure that my current approach gets each segment end point to within ±1
>> lsb and I suppose once you measure THD in some way it would not be an
>> excessive amount of work to tune the 256 end points in a few passes
>> through. This sounds like a tree search with pruning.
>
> If I could measure the THD accurately, I'd just blast in a corrective harmonic
> adder to the table for each of the harmonics. That could be scientific or just
> iterative. For the 2nd harmonic, for example, add a 2nd harmonic sine component
> into all the table points, many of which wouldn't even change by an LSB, to null
> out the observed distortion. Gain and phase, of course.

I don't think you are clear on this. The table and linear interp are as
good as they can get with the exception of one or possibly two lsbs to
minimize the error in the linear interp. There is no way to correct for
this by adding a second harmonic, etc. Remember, this is not a pure
table lookup, it is a two step approach.


>> I'm assuming there would be no practical closed form solution. But
>> couldn't the THD be calculated in a simulation rather than having to be
>> measured on the bench?
>
> Our experience is that the output amplifiers, after the DAC and lowpass filter,
> are the nasty guys for distortion, at least in the 10s of MHz. Lots of
> commercial RF generators have amazing 2nd harmonic specs, like -20 dBc. A table
> correction would unfortunately be amplitude dependent, so it gets messy fast.

Ok, if you are compensating for the rest of the hardware, that's a
different matter...

--

Rick

Phil Hobbs

unread,
Mar 17, 2013, 7:31:38 PM3/17/13
to
The simple method would be to compute a zillion-point FFT. Since the
function is really truly periodic, the FFT gives you the right answer
for the continuous-time Fourier transform, provided you have enough
points that you can see all the relevant harmonics.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510 USA
+1 845 480 2058

hobbs at electrooptical dot net
http://electrooptical.net

Tim Williams

unread,
Mar 17, 2013, 8:03:54 PM3/17/13
to
"Jon Kirwan" <jo...@infinitefactors.org> wrote in message
news:r2ick8d61stgkekae...@4ax.com...
> You already know why. Just not wearing your hat right now.
> Two terms of taylors at 0 of cos(x) is x + x^2/2. Two at
> sin(x) is 1 + x. One is quadratic, one is linear.

Oops, should be:
cos(x) ~= 1 - x^2/2
sin(x) ~= x (- x^3/6)
Mixed up your odd/even powers :)

Of course, the cubic is less important than the quadratic, so sin(x) ~= x is
sufficient for most purposes.

I suppose you could potentially synthesize the functions using low power
approximations (like these) and interpolating between them over modulo pi/4
segments. Probably wouldn't save many clock cycles relative to a CORDIC,
higher order polynomial, or other, for a given accuracy, so the standard
space-time tradeoff (LUT vs. computation) isn't changed.

Tim

--
Deep Friar: a very philosophical monk.
Website: http://www.seventransistorlabs.com/

John Larkin

unread,
Mar 17, 2013, 8:22:59 PM3/17/13
to
Why can't some harmonic component be added to the table points to improve
downstream distortion? Of course, you'd need a full 360 degree table, not a
folded one.

>
>
>>> I'm assuming there would be no practical closed form solution. But
>>> couldn't the THD be calculated in a simulation rather than having to be
>>> measured on the bench?
>>
>> Our experience is that the output amplifiers, after the DAC and lowpass filter,
>> are the nasty guys for distortion, at least in the 10s of MHz. Lots of
>> commercial RF generators have amazing 2nd harmonic specs, like -20 dBc. A table
>> correction would unfortunately be amplitude dependent, so it gets messy fast.
>
>Ok, if you are compensating for the rest of the hardware, that's a
>different matter...

For everything, including the DAC, but especially downstream stuff. The amps and
stuff may have 100x more distortion than the lookup table does.

John Larkin

unread,
Mar 17, 2013, 8:27:19 PM3/17/13
to
On Sun, 17 Mar 2013 19:31:38 -0400, Phil Hobbs
<pcdhSpamM...@electrooptical.net> wrote:

>On 3/17/2013 6:44 PM, rickman wrote:
>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>
>>> It would be interesting to distort the table entries to minimize THD. The
>>> problem there is how to measure very low THD to close the loop.
>>
>> It could also be a difficult problem to solve without some sort of
>> exhaustive search. Each point you fudge affects two curve segments and
>> each curve segment is affected by two points. So there is some degree
>> of interaction.
>>
>> Anyone know if there is a method to solve such a problem easily? I am
>> sure that my current approach gets each segment end point to within ±1
>> lsb and I suppose once you measure THD in some way it would not be an
>> excessive amount of work to tune the 256 end points in a few passes
>> through. This sounds like a tree search with pruning.
>>
>> I'm assuming there would be no practical closed form solution. But
>> couldn't the THD be calculated in a simulation rather than having to be
>> measured on the bench?
>>
>
>The simple method would be to compute a zillion-point FFT. Since the
>function is really truly periodic, the FFT gives you the right answer
>for the continuous-time Fourier transform, provided you have enough
>points that you can see all the relevant harmonics.
>

All you need now is a digitizer or spectrum analyzer that has PPM distortion!

We've bumped into this problem, trying to characterize some of our ARBs. The
best thing is to use a passive lowpass or notch filter (with very good Ls and
Cs!) to remove the fundamental and analyze what's left.

Jon Kirwan

unread,
Mar 17, 2013, 8:49:07 PM3/17/13
to
On Sun, 17 Mar 2013 19:03:54 -0500, "Tim Williams"
<tmor...@charter.net> wrote:

>"Jon Kirwan" <jo...@infinitefactors.org> wrote in message
>news:r2ick8d61stgkekae...@4ax.com...
>> You already know why. Just not wearing your hat right now.
>> Two terms of taylors at 0 of cos(x) is x + x^2/2. Two at
>> sin(x) is 1 + x. One is quadratic, one is linear.
>
>Oops, should be:
>cos(x) ~= 1 - x^2/2
>sin(x) ~= x (- x^3/6)
>Mixed up your odd/even powers :)

hehe. yes. But it gets the point across, either way. Detail I
didn't care a lot about at the moment.

>Of course, the cubic is less important than the quadratic, so sin(x) ~= x is
>sufficient for most purposes.

Yup.

Jon

glen herrmannsfeldt

unread,
Mar 17, 2013, 8:51:18 PM3/17/13
to
In comp.dsp rickman <gnu...@gmail.com> wrote:
(snip)
>>> I've been studying an approach to implementing a lookup table (LUT) to
>>> implement a sine function.

>> (snip)

>>> If you assume a straight line between the two endpoints the midpoint of
>>> each interpolated segment will have an error of
>>> ((Sin(high)-sin(low))/2)-sin(mid)

>> Seems what they instead do is implement

>> sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less
>> significant bits of theta. Also, cos(L) tends to be almost 1,
>> so they just say it is 1.

> Interesting. This tickles a few grey cells. Two values based solely on
> the MS portion of x and a value based solely on the LS portion. Two
> tables, three lookups a multiply and an add. That could work. The
> value for sin(M) would need to be full precision, but I assume the
> values of sin(L) could have less range because sin(L) will always be a
> small value...

The cos(M)sin(L) is one table, with some bits from each.

Well, when you do the interpolation, you first need to know the
spacing between points in the sine table. That spacing is
proportional to cos(x). So, the interpolation table is
indexed by the high four bits of M and the four bits of L.

(snip on getting to 90 degrees, or 2^n in binary.)

>> Well, you can fix it in various ways, but you definitely want 2^n.

(snip)
> I think one of the other posters was saying to add an entry for 90
> degrees. I don't like that. I could be done but it complicates the
> process of using a table for 0-90 degrees.

I am not sure yet how they do that. I did notice that when adding the
interpolation value they propagate the carry, so the output can go to
the full 1.0000 instead of 0.99999. But it will do that before 90
degrees.

(snip)
>> Seems like that is one of the suggestions, but not done in the ROMs
>> they were selling. Then the interpolation has to add or subtract,
>> which is slightly (in TTL) harder.

>> The interpolated sine was done in 1970 with 128x8 ROMs. With larger
>> ROMs, like usual today, you shouldn't need it unless you want really
>> high resolution.

> I'm not using an actual ROM chip. This is block ram in a rather small
> FPGA with only six blocks. I need two channels and may want to use some
> of the blocks for other functions.

So, the one referenced uses three 256x4 ROMs to get 12 bits of sin(M)
from 8 bits of M, and then a fourth 128x8 ROM to get five bits to
add to the low bits of the 12 bit sin(M).

-- glen

robert bristow-johnson

unread,
Mar 17, 2013, 10:14:41 PM3/17/13
to
On 3/17/13 5:49 PM, rickman wrote:
> On 3/16/2013 7:13 PM, robert bristow-johnson wrote:
>>
>> this *should* be a relatively simple issue, but i am confused
>>
>> On 3/16/13 6:30 PM, rickman wrote:
>>> I've been studying an approach to implementing a lookup table (LUT) to
>>> implement a sine function. The two msbs of the phase define the
>>> quadrant. I have decided that an 8 bit address for a single quadrant is
>>> sufficient with an 18 bit output.
>>
>> 10 bits or 1024 points. since you're doing linear interpolation, add one
>> more, copy the zeroth point x[0] to the last x[1024] so you don't have
>> to do any modulo (by ANDing with 1023) on the address of the second
>> point. (probably not necessary for hardware implementation.)
>>
>>
>> x[n] = sin( (pi/512)*n ) for 0 <= n <= 1024
>
> So you are suggesting a table with 2^n+1 entries? Not such a great idea
> in some apps, like hardware.

i think about hardware as a technology, not so much an app. i think
that apps can have either hardware or software realizations.

> What is the advantage?

in *software* when using a LUT *and* linear interpolation for sinusoid
generation, you are interpolating between x[n] and x[n+1] where n is
from the upper bits of the word that comes outa the phase accumulator:

#define PHASE_BITS 10
#define PHASE_MASK 0x001FFFFF // 2^21 - 1
#define FRAC_BITS 11
#define FRAC_MASK 0x000007FF // 2^11 - 1
#define ROUNDING_OFFSET 0x000400

long phase, phase_increment, int_part, frac_part;

long y, x[1025];

...

phase = phase + phase_increment;
phase = phase & PHASE_MASK;

int_part = phase >> FRAC_BITS;
frac_part = phase & FRAC_MASK;

y = x[int_part];
y = y + (((x[int_part+1]-y)*frac_part + ROUNDING_OFFSET)>>FRAC_BITS);


now if the lookup table is 1025 long with the last point a copy of the
first (that is x[1024] = x[0]), then you need not mask int_part+1.
x[int_part+1] always exists.


> Also, why 10 bit
> address for a 1024 element table?

uh, because 2^10 = 1024?

that came from your numbers: 8-bit address for a single quadrant and
there are 4 quadrants, 2 bits for the quadrant.


> My calculations indicate a linear
> interpolation can be done with 4 ppm accuracy with a 256 element LUT.
> I'm not completely finished my simulation, but I'm pretty confident this
> much is correct.

i'm agnostic about the specs for your numbers. you are saying that 8
bits per quadrant is enough and i am stipulating to that.

>>> Another 11 bits of phase will give me
>>> sufficient resolution to interpolate the sin() to 18 bits.
>>
>> so a 21 bit total index. your frequency resolution would be 2^(-21) in
>> cycles per sampling period or 2^(-21) * Fs. those 2 million values would
>> be the only frequencies you can meaningfully generate.
>
> No, that is the phase sent to the LUT. The total phase accumulator can
> be larger as the need requires.

why wouldn't you use those extra bits in the linear interpolation if
they are part of the phase word? not doing so makes no sense to me at
all. if you have more bits of precision in the fractional part of the phase

>>> If you assume a straight line between the two endpoints the midpoint of
>>> each interpolated segment will have an error of
>>> ((Sin(high)-sin(low))/2)-sin(mid)
>>>
>>
>> do you mean "+" instead of the first "-"? to be explicit:
>>
>> ( sin((pi/512)*(n+1)) + sin((pi/512)*n) )/2 - sin((pi/512)*(n+0.5))
>>
>> that's the error in the middle. dunno if it's the max error, but it's
>> might be.
>
> Yes, thanks for the correction. The max error? I'm not so worried about
> that exactly. The error is a curve with the max magnitude near the
> middle if nothing further is done to minimize it.
>
>
>>> Without considering rounding, this reaches a maximum at the last segment
>>> before the 90 degree point.
>>
>> at both the 90 and 270 degree points. (or just before and after those
>> points.)
>
> I'm talking about the LUT. The LUT only considers the first quadrant.

not in software. i realize that in hardware and you're worried about
real estate, you might want only one quadrant in ROM, but in software
you would have an entire cycle of the waveform (plus one repeated point
at the end for the linear interpolation).

and if you *don't* have the size constraint in your hardware target and
you can implement a 1024 point LUT as easily as a 256 point LUT, then
why not? so you don't have to fiddle with reflecting the single
quadrant around a couple of different ways.

>
>
>>> I calculate about 4-5 ppm which is about the
>>> same as the quantization of an 18 bit number.
>>>
>>> There are two issues I have not come across regarding these LUTs. One is
>>> adding a bias to the index before converting to the sin() value for the
>>> table. Some would say the index 0 represents the phase 0 and the index
>>> 2^n represents 90 degrees. But this is 2^n+1 points which makes a LUT
>>> inefficient, especially in hardware. If a bias of half the lsb is added
>>> to the index before converting to a sin() value the value 0 to 2^n-1
>>> becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized table
>>> properly. I assume this is commonly done and I just can't find a mention
>>> of it.
>>
>> do you mean biasing by 1/2 of a point? then your max error will be *at*
>> the 90 and 270 degree points and it will be slightly more than what you
>> had before.
>
> No, not quite right. There is a LUT with points spaced at 90/255 degrees
> apart starting at just above 0 degrees. The values between points in the
> table are interpolated with a maximum deviation near the center of the
> interpolation. Next to 90 degrees the interpolation is using the maximum
> interpolation factor which will result in a value as close as you can
> get to the correct value if the end points are used to construct the
> interpolation line. 90 degrees itself won't actually be represented, but
> rather points on either side, 90±delta where delta is 360° / 2^(n+1)
> with n being the number of bits in the input to the sin function.
>

i read this over a couple times and cannot grok it quite. need to be
more explicit (mathematically, or with C or pseudocode) with what your
operations are.

>
>>> The other issue is how to calculate the values for the table to give the
>>> best advantage to the linear interpolation. Rather than using the exact
>>> match to the end points stored in the table, an adjustment could be done
>>> to minimize the deviation over each interpolated segment. Without this,
>>> the errors are always in the same direction. With an adjustment the
>>> errors become bipolar and so will reduce the magnitude by half (approx).
>>> Is this commonly done? It will require a bit of computation to get these
>>> values, but even a rough approximation should improve the max error by a
>>> factor of two to around 2-3 ppm.
>>
>> if you assume an approximate quadratic behavior over that short segment,
>> you can compute the straight line where the error in the middle is equal
>> in magnitude (and opposite in sign) to the error at the end points.
>> that's a closed form solution, i think.
>
> Yes, it is a little tricky because at this point we are working with
> integer math (or technically fixed point I suppose).

not with defining the points of the table. you do that in MATLAB or in
C or something. your hardware gets its LUT spec from something you
create with a math program.

> Rounding errors is
> what this is all about. I've done some spreadsheet simulations and I
> have some pretty good results. I updated it a bit to generalize it to
> the LUT size and I keep getting the same max error counts (adjusted to
> work with integers rather than fractions) ±3 no matter what the size of
> the interpolation factor. I don't expect this and I think I have
> something wrong in the calculations. I'll need to resolve this.
>
>
>> dunno if that is what you actually want for a sinusoidal waveform
>> generator. i might think you want to minimize the mean square error.
>
> We are talking about the lsbs of a 20+ bit word. Do you think there will
> be much of a difference in result?

i'm just thinking that if you were using LUT to generate a sinusoid that
is a *signal* (not some parameter that you use to calculate other
stuff), then i think you want to minimize the mean square error (which
is the power of the noise relative to the power of the sinusoid). so
the LUT values might not be exactly the same as the sine function
evaluated at those points.

> I need to actually be able to do the
> calculations and get this done rather than continue to work on the
> process. Also, each end point affects two lines,

what are "lines"? not quite following you.

> so there are tradeoffs,
> make one better and the other worse? It seems to get complicated very
> quickly.

if you want to define your LUT to minimize the mean square error,
assuming linear interpolation between LUT entries, that can be a little
complicated, but i don't think harder than what i remember doing in grad
school. want me to show you? you get a big system of linear equations
to get the points. if you want to minimize the maximum error (again
assuming linear interpolation), then it's that fitting a straight line
to that little snippet of quadratic curve bit that i mentioned.

>
>>> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
>>> advantage of this resolution! lol
>>>
>>> One thing I learned while doing this is that near 0 degrees the sin()
>>> function is linear (we all knew that, right?) but near 90 degrees, the
>>> sin() function is essentially quadratic. Who would have thunk it?
>>
>> Newton? Leibnitz? Gauss?
>>
>> sin(t + pi/2) = cos(t)
>
> How does that imply a quadratic curve at 90 degrees?

sin(t) = t + ... small terms when t is small

cos(t) = 1 - (t^2)/2 + ... small terms when t is small

> At least I think like the greats!

but they beat you by a few hundred years.

:-)

--

r b-j r...@audioimagination.com

"Imagination is more important than knowledge."


Eric Jacobsen

unread,
Mar 17, 2013, 10:53:31 PM3/17/13
to
If you are implementing in an FPGA with no multipliers, then CORDIC is
the first thing you should be looking at. Do a web search on
"CORDIC" for references. I know that's not at all obvious to you,
but it works to try. Or you could keep coming back here for more
spoonfeeding.

>
>--
>
>Rick

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com

Robert Baer

unread,
Mar 18, 2013, 1:44:22 AM3/18/13
to
Problem with that..could see only front and back cover.
Different source perhaps?

Robert Baer

unread,
Mar 18, 2013, 3:01:28 AM3/18/13
to
Once upon a time, about 30 years ago, i fiddled with routines to
multiply a millions-of-digits number by another millions-of-digits number.
Naturally, i used the FFT and convolution (and bit-reordering) to do
this.
As the number of digits grew, i needed to use a larger N (in 2^N
samples), so different algorithms were needed in different ranges to
maximize speed.
So i had to try a goodly number of published, standard, accepted,
used routines.
ALL algorithms for N larger than 5 worked but i was able to speed
them up anywhere from 10% to 30% by using "tricks" which were the
equivalent of "folding" around 22.5 degrees.
Start with a full 360 degree method.
"Fold" in half: sin(-x)=-sin(x) or 180 degrees.
Half again on the imaginary axis for what you are doing for 90
degrees: sin(90-x) = sin(90)*cos(x)-cos(90)*sin(x) = -cos(x) exact.
Half again: sin(45)=cos(45)=2^(-1/2)~~.707 carry out as far as you
want; sin/cos to 45 degrees. sin(x=0..45) direct, sin(x=45..90)
=cos(90-x) accuracy depending on value of sin/cos(45).
That is prolly as far as you want to go with a FPGA, because (as you
indicated) you do not want to attempt multiplies in the FPGA.
That last step of "folding" might give better results or better FPGA
/ LUT calcs.

Calculating actual values for the LUT for maximum accuracy: the trig
series x-(x^3/3!)+(x^5/5!) etc should give better results from 0..90
than the approx on page 51.
Do not hesitate to try alternate schemes: sin(2x) = 2*sin(x)*cos(x)
and sin(0.5*x)=+/- sqrt((1-cos(x))/2) where that sign depends on the
quadrant of x.

glen herrmannsfeldt

unread,
Mar 18, 2013, 2:36:12 AM3/18/13
to
In comp.dsp Robert Baer <rober...@localnet.com> wrote:

(snip, I wrote)
I just tried it, copy and paste from the above link, and it worked.

You might be sure to use Adobe reader if that matters.

Also, it is page 273 in the document, page 282 in the PDF.

(snip)

Jasen Betts

unread,
Mar 18, 2013, 3:35:28 AM3/18/13
to
On 2013-03-18, Eric Jacobsen <eric.j...@ieee.org> wrote:
> On Sun, 17 Mar 2013 18:35:37 -0400, rickman <gnu...@gmail.com> wrote:

> If you are implementing in an FPGA with no multipliers, then CORDIC is
> the first thing you should be looking at.

?? CORDIC requires even more multiplies

--
⚂⚃ 100% natural

Jasen Betts

unread,
Mar 18, 2013, 5:06:13 AM3/18/13
to
On 2013-03-18, Eric Jacobsen <eric.j...@ieee.org> wrote:
> On Sun, 17 Mar 2013 18:35:37 -0400, rickman <gnu...@gmail.com> wrote:

> If you are implementing in an FPGA with no multipliers, then CORDIC is
> the first thing you should be looking at.

?? CORDIC requires even more multiplies


But you can multiply two short lookup tables to simulate a longer lookup
table using the identity:

sin(a+b) = sin(a)*cos(b) + cos(a)*sin(b)

so you do a coarse lookup table with sin(a) (and you can get cos(a) from it
too if a is some integer factor of 90 degrees)

and do two fine lookup tables for sin(b) and cos(b)

eg: you can do 16 bits of angle at 16 bit resolution in 512 bytes of
table and ~80 lines of assembler code
(but the computation needs two multiplies)

the first two bits determine the quadrant.

the next 7 bits are on a coarse lookup table
128 entries x 16 bits is 256 bytes

next bit defines the sign of b (if set b is negative, add one to a)

and the final 6 are the rest of b

from a 64 entry table you can get sin(|b|)
and another 64 for cos(|b|)

treat b=0 as a special case - have entries for 1..64 in the table
correct sin(|b|) for negative b

a few branches and bit-shifts, two multiplies, one add and you have
your value.

--
⚂⚃ 100% natural

Spehro Pefhany

unread,
Mar 18, 2013, 10:00:46 AM3/18/13
to
CORDIC requires shift and add. That's why it was used in early
scientific calculators- very little hardware was required.

Phil Hobbs

unread,
Mar 18, 2013, 10:12:45 AM3/18/13
to
I have an HP 339A you can borrow. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

John Larkin

unread,
Mar 18, 2013, 10:26:45 AM3/18/13
to
On Mon, 18 Mar 2013 10:12:45 -0400, Phil Hobbs
That's for audio. Who cares about distortion for audio? And it gets nowhere near
PPM turf.

Uwe Hercksen

unread,
Mar 18, 2013, 10:53:55 AM3/18/13
to


rickman schrieb:

> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function. The two msbs of the phase define the
> quadrant. I have decided that an 8 bit address for a single quadrant is
> sufficient with an 18 bit output. Another 11 bits of phase will give me
> sufficient resolution to interpolate the sin() to 18 bits.
>
> If you assume a straight line between the two endpoints the midpoint of
> each interpolated segment will have an error of
> ((Sin(high)-sin(low))/2)-sin(mid)

Hello,

you may use the well known formula:
sin(a+b) = sin(a)cos(b) + cos(a)sin(b)

It looks like you will need four tables, one coarse sin(), one coarse
cos(), one fine sin() and one fine cos() table, but you only need the
coarse sin() for one quadrant and the fine sin() and cos() tables.
The coarse sin() table for one quadrant will also deliver the values for
cos()

Bye

Rob Gaddi

unread,
Mar 18, 2013, 12:21:47 PM3/18/13
to
On Sat, 16 Mar 2013 18:30:51 -0400
rickman <gnu...@gmail.com> wrote:

> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function. The two msbs of the phase define the
> quadrant. I have decided that an 8 bit address for a single quadrant is
> sufficient with an 18 bit output. Another 11 bits of phase will give me
> sufficient resolution to interpolate the sin() to 18 bits.
>
> If you assume a straight line between the two endpoints the midpoint of
> each interpolated segment will have an error of
> ((Sin(high)-sin(low))/2)-sin(mid)
>
> --
>
> Rick

Just one note, since you're doing this in an FPGA. If your look up
table is a dual-port block RAM, then you can look up the slope
simultaneously rather than calculate it. d sin(x)/dx = cos(x) = sin(x +
90 deg).

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.

Michael A. Terrell

unread,
Mar 18, 2013, 1:58:28 PM3/18/13
to

Robert Baer wrote:
> Problem with that..could see only front and back cover.
> Different source perhaps?


It works for me. Downloaded & opened with Adobe Acrobat X Ver.10.1.6


--
Politicians should only get paid if the budget is balanced, and there is
enough left over to pay them.

Jerry Avins

unread,
Mar 18, 2013, 2:03:31 PM3/18/13
to
On 3/17/2013 8:03 PM, Tim Williams wrote:
> "Jon Kirwan" <jo...@infinitefactors.org> wrote in message
> news:r2ick8d61stgkekae...@4ax.com...
>> You already know why. Just not wearing your hat right now.
>> Two terms of taylors at 0 of cos(x) is x + x^2/2. Two at
>> sin(x) is 1 + x. One is quadratic, one is linear.
>
> Oops, should be:
> cos(x) ~= 1 - x^2/2
> sin(x) ~= x (- x^3/6)
> Mixed up your odd/even powers :)
>
> Of course, the cubic is less important than the quadratic, so sin(x) ~=
> x is sufficient for most purposes.
>
> I suppose you could potentially synthesize the functions using low power
> approximations (like these) and interpolating between them over modulo
> pi/4 segments. Probably wouldn't save many clock cycles relative to a
> CORDIC, higher order polynomial, or other, for a given accuracy, so the
> standard space-time tradeoff (LUT vs. computation) isn't changed.

An in-between approach is quadratic interpolation. Rickman reads Forth.
http://users.erols.com/jyavins/typek.htm may inspire him. (It's all
fixed-point math.) ((As long as there is interpolation code anyway, an
octant is sufficient.))

Jerry
--
Engineering is the art of making what you want from things you can get.

Jerry Avins

unread,
Mar 18, 2013, 2:10:54 PM3/18/13
to
http://www.andraka.com/files/crdcsrvy.pdf should be a good start. Do you
remember Ray Andraka?

Eric Jacobsen

unread,
Mar 18, 2013, 3:54:11 PM3/18/13
to
On 18 Mar 2013 09:06:13 GMT, Jasen Betts <ja...@xnet.co.nz> wrote:

>On 2013-03-18, Eric Jacobsen <eric.j...@ieee.org> wrote:
>> On Sun, 17 Mar 2013 18:35:37 -0400, rickman <gnu...@gmail.com> wrote:
>
>> If you are implementing in an FPGA with no multipliers, then CORDIC is
>> the first thing you should be looking at.
>
>?? CORDIC requires even more multiplies
>
>
>But you can multiply two short lookup tables to simulate a longer lookup
>table using the identity:
>
> sin(a+b) = sin(a)*cos(b) + cos(a)*sin(b)
>
> so you do a coarse lookup table with sin(a) (and you can get cos(a) from it
>ti if a is some integer factor of 90 degrees)
>
> and do two fine lookup tables for sin(b) and cos(b)
>
> eg: you can do 16 bits of angle at 16 bit resoluton in 512 bytes of
> table and ~80 lines of assembler code
> (but the computation needs two multiplies)

He indicated that his target is an FPGA that has no multiplies. This
is the exact environment where the CORDIC excels. LUTs can be
expensive in FPGAs depending on the vendor and the device.

Almost any DSP process, including a large DFT, can be done in a LUT,
it's just that nobody wants to build memories that big. It's all
about the tradeoffs. FPGA with no multipliers -> CORDIC, unless there
are big memories available in which case additional tradeoffs have to
be considered. Most FPGAs that don't have multipliers also don't
have a lot of memory for LUTS.


> the first two bits determine the quadrant.
>
> the next 7 bits are on a coarse lookup table
> 128 entries x 16 bits is 256 bytes
>
> next bit defines the sign of b (if set b is negative,add one to a)
>
> and the final 6 are the rest of b
>
> from a 64 entry table you can get sin(|b|)
> and another 64 for cos(|b|)
>
> treat b=0 as a special case - have entries for 1..64 in the table
> correct sin(|b|) for negative b
>
> a few branches and bit-shifts, two multiplies, one add and you have
> your value.

There are lots of ways to do folded-wave implementations for sine
generation, e.g., DDS implementations, etc. Again, managing the
tradeoffs with the available resources usually drives how to do get it
done. Using a quarter-wave LUT with an accumulator has been around
forever, so it's a pretty well-studied problem.

>
>--
>⚂⚃ 100% natural

rickman

unread,
Mar 18, 2013, 4:21:28 PM3/18/13
to
Mostly because the table is already optimal, unless you want to correct
for other components.


>>>> I'm assuming there would be no practical closed form solution. But
>>>> couldn't the THD be calculated in a simulation rather than having to be
>>>> measured on the bench?
>>>
>>> Our experience is that the output amplifiers, after the DAC and lowpass filter,
>>> are the nasty guys for distortion, at least in the 10s of MHz. Lots of
>>> commercial RF generators have amazing 2nd harmonic specs, like -20 dBc. A table
>>> correction would unfortunately be amplitude dependent, so it gets messy fast.
>>
>> Ok, if you are compensating for the rest of the hardware, that's a
>> different matter...
>
> For everything, including the DAC, but especially downstream stuff. The amps and
> stuff may have 100x more distortion than the lookup table does.

Ok, then you can add that to the LUT. As you say, it will require a
full 360 degree table. This will have a limit on the order of the
harmonics you can apply, but I expect the higher order harmonics would
have very, very small coefficients anyway.

--

Rick

rickman

unread,
Mar 18, 2013, 4:23:29 PM3/18/13
to
On 3/17/2013 7:31 PM, Phil Hobbs wrote:
> On 3/17/2013 6:44 PM, rickman wrote:
>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>
>>> It would be interesting to distort the table entries to minimize THD.
>>> The
>>> problem there is how to measure very low THD to close the loop.
>>
>> It could also be a difficult problem to solve without some sort of
>> exhaustive search. Each point you fudge affects two curve segments and
>> each curve segment is affected by two points. So there is some degree
>> of interaction.
>>
>> Anyone know if there is a method to solve such a problem easily? I am
sure that my current approach gets each segment end point to within ±1
>> lsb and I suppose once you measure THD in some way it would not be an
>> excessive amount of work to tune the 256 end points in a few passes
>> through. This sounds like a tree search with pruning.
>>
>> I'm assuming there would be no practical closed form solution. But
>> couldn't the THD be calculated in a simulation rather than having to be
>> measured on the bench?
>>
>
> The simple method would be to compute a zillion-point FFT. Since the
> function is really truly periodic, the FFT gives you the right answer
> for the continuous-time Fourier transform, provided you have enough
> points that you can see all the relevant harmonics.

Ok, as a mathematician would say, "There exists a solution!"

--

Rick

rickman

unread,
Mar 18, 2013, 5:36:03 PM3/18/13
to
Almost exact, sin(90-x) = cos(x), not -cos(x). More important is
sin(180-x) = sin(x) from the same equation above. This is the folding
around 90 degrees. Or sin(90+x) = cos(x), but I don't have a cos
table... lol

This is the two levels of folding that are "free" or at least very
inexpensive. It requires controlled inversion on the bits of x and a
controlled 2's complement on the LUT output.


> Half again: sin(45)=cos(45)=2^(-1/2)~~.707 carry out as far as you want;
> sin/cos to 45 degrees. sin(x=0..45) direct, sin(x=45..90) =cos(45-x)
> accuracy depending on value of sin/cos(45).

You really lost me on this one. How do I get cos(45-x)? If using this
requires a multiply by sqrt(2), that isn't a good thing. I don't have a
problem with a 512 element table. The part I am using can provide up to
36 bit words, so I should be good to go with the full 90 degree table.

Separating x into two parts, 45 and x', where x' ranges from 0 to 45,
the notation should be, sin(45+x') = sin(45)cos(x') + cos(45)sin(x') =
sin(45) * (cos(x') + sin(x')). I'm not sure that helps on two levels.
I don't have cos(x') and I don't want to do multiplies unless I have to.


> That is prolly as far as you want to go with a FPGA, because (as you
> indicated) you do not want to attempt multiplies in the FPGA.
> That last step of "folding" might give better results or better FPGA /
> LUT calcs.
>
> Calculating actual values for the LUT for maximum accuracy: the trig
> series x-(x^3/3!)+(x^5/5!) etc should give better results from 0..90
> than the approx on page 51.
> Do not hesitate to try alternate schemes: sin(2x) = 2*sin(x)*cos(x) and
> sin(0.5*x)=+/- sqrt((1-cos(x))/2) where that sign depends on the
> quadrant of x.

I don't have resources to do a lot of math. A linear interpolation will
work ok primarily because it only uses one multiply of limited length.
I can share this entire circuit between the two channels because the
CODEC interface is time multiplexed anyway.

I did some more work on the interpolation last night and I put to rest
my last nagging concerns. I am confident I can get max errors of about
±2 lsbs using a 512 element LUT and 10 bit interpolation. This is not
just the rounding errors in the calculations but also the error due to
the input resolution which is 21 bits. The errors without the input
resolution are limited to about 1.5 lsbs. I expect the RMS error is
very small indeed as most point errors will be much smaller than the max.

--

Rick

lang...@fonz.dk

unread,
Mar 18, 2013, 5:38:53 PM3/18/13
to
On Mar 18, 7:10 pm, Jerry Avins <j...@ieee.org> wrote:
> On 3/17/2013 6:35 PM, rickman wrote:
>
>
>
>
>
>
>
>
>
> > On 3/17/2013 6:23 PM, langw...@fonz.dk wrote:
> >> On Mar 17, 10:56 pm, rickman<gnu...@gmail.com>  wrote:
> >>> On 3/16/2013 8:02 PM, langw...@fonz.dk wrote:
>
> >>>> why  not skip the lut and just do a full sine approximation
>
> >>>> something like this, page 53-54
>
> >>>>http://www.emt.uni-linz.ac.at/education/Inhalte/pr_mderf/downloads/AD...
>
> >>> I'm not sure this would be easier.  The LUT and interpolation require a
> >>> reasonable amount of logic.  This requires raising X to the 5th power
> >>> and five constant multiplies.  My FPGA doesn't have multipliers and this
> >>> may be too much for the poor little chip.  I suppose I could do some of
> >>> this with a LUT and linear interpolation... lol
>
> >> I see, I assumed you had an MCU (or fpga) with multipliers
>
> >> considered CORDIC?
>
> > I should learn that.  I know it has been used to good effect in FPGAs.
> > Right now I am happy with the LUT, but I would  like to learn more about
> > CORDIC.  Any useful references?
>
> http://www.andraka.com/files/crdcsrvy.pdf should be a good start. Do you
> remember Ray Andraka?
>

is he still around? I just checked his page, last update was 2008

a quick google finds something that might give a quick view of how it
works
https://github.com/the0b/MSP430-CORDIC-sine-cosine/blob/master/cordic.c

-Lasse


rickman

unread,
Mar 18, 2013, 5:47:41 PM3/18/13
to
It is a lot of document for just two pages. Try this link to see the
short version...

arius.com/foobar/1972_National_MOS_Integrated_Circuits_Pages_273-274.pdf

--

Rick

Jamie

unread,
Mar 18, 2013, 7:01:38 PM3/18/13
to
isn't it "fubar" ?

Jamie

rickman

unread,
Mar 18, 2013, 5:58:41 PM3/18/13
to
On 3/17/2013 8:51 PM, glen herrmannsfeldt wrote:
> In comp.dsp rickman<gnu...@gmail.com> wrote:
> (snip)
>>>> I've been studying an approach to implementing a lookup table (LUT) to
>>>> implement a sine function.
>
>>> (snip)
>
>>>> If you assume a straight line between the two endpoints the midpoint of
>>>> each interpolated segment will have an error of
>>>> ((sin(high)+sin(low))/2)-sin(mid)
>
>>> Seems what they instead do is implement
>
>>> sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less
>>> significant bits of theta. Also, cos(L) tends to be almost 1,
>>> so they just say it is 1.
>
>> Interesting. This tickles a few grey cells. Two values based solely on
>> the MS portion of x and a value based solely on the LS portion. Two
>> tables, three lookups a multiply and an add. That could work. The
>> value for sin(M) would need to be full precision, but I assume the
>> values of sin(L) could have less range because sin(L) will always be a
>> small value...
>
> The cos(M)sin(L) is one table, with some bits from each.
>
> Well, when you do the interpolation, you first need to know the
> spacing between points in the sine table. That spacing is
> proportional to cos(x). So, the interpolation table is
> indexed by the high four bits of M and the four bits of L.

Yes, but the error is much larger than the errors I am working with. I
want an 18 bit output from the process... at least a 16 bit accurate
since the DAC is only ~90 dB SINAD. I expect to get something that is
about 17-17.5 ENOB going into the DAC.

I don't think the method described in the paper is so good when you want
accuracies this good. The second LUT gets rather large. But then it
doesn't use a multiplier does it?


>>> Well, you can fix it in various ways, but you definitely want 2^n.
>
> (snip)
>> I think one of the other posters was saying to add an entry for 90
>> degrees. I don't like that. I could be done but it complicates the
>> process of using a table for 0-90 degrees.
>
> I am not sure yet how they do that. I did notice that when adding the
> interpolation value they propagate the carry, so the output can go to
> the full 1.0000 instead of 0.99999. But it will do that before 90
> degrees.

They just meant to use a 2^n+1 long table rather than just 2^n length.
No real problem if you are working in software with lots of memory I
suppose.


>>> Seems like that is one of the suggestions, but not done in the ROMs
>>> they were selling. Then the interpolation has to add or subtract,
>>> which is slightly (in TTL) harder.
>
>>> The interpolated sine was done in 1970 with 128x8 ROMs. With larger
>>> ROMs, like usual today, you shouldn't need it unless you want really
>>> high resolution.
>
>> I'm not using an actual ROM chip. This is block ram in a rather small
>> FPGA with only six blocks. I need two channels and may want to use some
>> of the blocks for other functions.
>
> So, the one referenced uses three 256x4 ROMs to get 12 bits of sin(M)
> from 8 bits of M, and then a fourth 128x8 ROM to get five bits to
> add to the low bits of the 12 bit sin(M).

Yep. About a factor of 35 dB worse than my design I expect.

One thing I realized about the descriptions most of these designs give
is they only analyze the noise from the table inaccuracies *at the point
of the sample*. In other words, they don't consider the noise from the
input resolution. Even if you get perfect reconstruction of the sine
values, such as a pure LUT, you still have the noise of the resolution
of the input to the sine generation. Just my 2 cents worth...

--

Rick

k...@attt.bizz

unread,
Mar 18, 2013, 6:09:27 PM3/18/13
to
On Mon, 18 Mar 2013 14:38:53 -0700 (PDT), "lang...@fonz.dk"
<lang...@fonz.dk> wrote:

>On Mar 18, 7:10 pm, Jerry Avins <j...@ieee.org> wrote:
>> On 3/17/2013 6:35 PM, rickman wrote:
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> > On 3/17/2013 6:23 PM, langw...@fonz.dk wrote:
>> >>> On Mar 17, 10:56 pm, rickman<gnu...@gmail.com> wrote:
>> >>> On 3/16/2013 8:02 PM, langw...@fonz.dk wrote:
>>
>> >>>> why not skip the lut and just do a full sine approximation
>>
>> >>>> something like this, page 53-54
>>
>> >>>>http://www.emt.uni-linz.ac.at/education/Inhalte/pr_mderf/downloads/AD...
>>
>> >>> I'm not sure this would be easier. The LUT and interpolation require a
>> >>> reasonable amount of logic. This requires raising X to the 5th power
>> >>> and five constant multiplies. My FPGA doesn't have multipliers and this
>> >>> may be too much for the poor little chip. I suppose I could do some of
>> >>> this with a LUT and linear interpolation... lol
>>
>> >> I see, I assumed you had an MCU (or fpga) with multipliers
>>
>> >> considered CORDIC?
>>
>> > I should learn that. I know it has been used to good effect in FPGAs.
>> > Right now I am happy with the LUT, but I would like to learn more about
>> > CORDIC. Any useful references?
>>
>> http://www.andraka.com/files/crdcsrvy.pdf should be a good start. Do you
>> remember Ray Andraka?
>>
>
>is he still around? I just checked his page, last update was 2008

His LinkedIn profile was updated in October of last year.

rickman

unread,
Mar 18, 2013, 6:12:15 PM3/18/13
to
That is an interesting point, but this is actually harder to calculate
with than the difference. First, cos(x) is sin(90-x). Second, where we
are looking for the sin(M+L) (M is msbs and L is lsbs) the linear
interpolation becomes,

delta = (sin(M+1) - sin(M)) * L

This is a simple multiply.

To do this using the cos function it becomes,

delta = sin(1/Mmax) * cos(M) * L

sin(1/Mmax) is a constant to set the proper scaling factor. With two
variables and a constant, this requires two multiplies rather than one
multiply and one subtraction. As it turns out my LUT has lots of bits
out so I may just store the difference eliminating one step. It depends
on how I end up designing the actual circuit. The ram can either be a
true dual port with 18 bits or a single port with 36 bits. So if I need
the dual port, then I might not want to store the delta values. Either
way, this is no biggie.

--

Rick

rickman

unread,
Mar 18, 2013, 6:39:31 PM3/18/13
to
On 3/18/2013 7:01 PM, Jamie wrote:
> rickman wrote:
>>
>> It is a lot of document for just two pages. Try this link to see the
>> short version...
>>
>> arius.com/foobar/1972_National_MOS_Integrated_Circuits_Pages_273-274.pdf
>>
> isn't it "fubar" ?
>
> Jamie

This is the French spelling. Make sure you put the accent on the second
syLAble.

--

Rick

rickman

unread,
Mar 18, 2013, 6:57:13 PM3/18/13
to
On 3/18/2013 2:10 PM, Jerry Avins wrote:
> On 3/17/2013 6:35 PM, rickman wrote:
>> On 3/17/2013 6:23 PM, lang...@fonz.dk wrote:
>>> On Mar 17, 10:56 pm, rickman<gnu...@gmail.com> wrote:
>>>> On 3/16/2013 8:02 PM, langw...@fonz.dk wrote:
>>>>
>>>>> why not skip the lut and just do a full sine approximation
>>>>
>>>>> something like this, page 53-54
>>>>
>>>>> http://www.emt.uni-linz.ac.at/education/Inhalte/pr_mderf/downloads/AD...
>>>>>
>>>>>
>>>>
>>>> I'm not sure this would be easier. The LUT and interpolation require a
>>>> reasonable amount of logic. This requires raising X to the 5th power
>>>> and five constant multiplies. My FPGA doesn't have multipliers and this
>>>> may be too much for the poor little chip. I suppose I could do some of
>>>> this with a LUT and linear interpolation... lol
>>>>
>>>
>>> I see, I assumed you had an MCU (or fpga) with multipliers
>>>
>>> considered CORDIC?
>>
>> I should learn that. I know it has been used to good effect in FPGAs.
>> Right now I am happy with the LUT, but I would like to learn more about
>> CORDIC. Any useful references?
>
> http://www.andraka.com/files/crdcsrvy.pdf should be a good start. Do you
> remember Ray Andraka?

Yes, of course. I should have thought of that. I don't go to his site
often, but his stuff is always excellent. I may have even downloaded
this paper before. I took a look at the CORDIC once a few years ago and
didn't get too far. I'm happy with my current approach for the moment,
but I will dig into this more another time.

I appreciate the link. Thanks Jerry.

--

Rick

rickman

unread,
Mar 18, 2013, 7:01:17 PM3/18/13
to
On 3/17/2013 6:02 PM, glen herrmannsfeldt wrote:
> In comp.dsp rickman<gnu...@gmail.com> wrote:
>> I've been studying an approach to implementing a lookup table (LUT) to
>> implement a sine function.
>
> (snip)
>
>> If you assume a straight line between the two endpoints the midpoint of
>> each interpolated segment will have an error of
>> ((sin(high)+sin(low))/2)-sin(mid)
>
> Seems what they instead do is implement
>
> sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less
> significant bits of theta. Also, cos(L) tends to be almost 1,
> so they just say it is 1.

I did a little work with this tonight and guess what, this is nearly the
same as the linear interpolation. cos(L) is as you say, nearly one. So
close in fact, that using the 10 msbs for M and an 18 bit output word,
the value calculated for the 1023rd entry is 0x40000.

That leaves sin(L). Surprise! This turns out to be a straight line! So
the formula is FAPP the same as the linear interpolation with the added
fudge factor of multiplying sin(L) by cos(M) rather than just
multiplying L directly with the difference sin(M+1)-sin(M). The sin*cos
method does require a second table to be stored. In fact, the paper you
reference uses a bit of sleight of hand to include both the cos(M) *
sin(L) at a lesser precision both in the input and output of this table.
I'm not sure how well this would work with higher precision needs, but
I suppose it is worth a bit further look. It would be nice to eliminate
the multiplier which is used in the linear interpolation.

So I think this is just different strokes for different folks in the
end. The sin*cos method might work out a little better with the RMS
noise depending on how the second table is handled. But I think it may
end up... well, in the noise anyway.

I think the real problem is that as the resolution of the input
increases, the second table gets harder to keep small. But their trick
of essentially tossing the middle bits seems to work fairly well.


>> The other issue is how to calculate the values for the table to give the
>> best advantage to the linear interpolation. Rather than using the exact
>> match to the end points stored in the table, an adjustment could be done
>> to minimize the deviation over each interpolated segment. Without this,
>> the errors are always in the same direction. With an adjustment the
>> errors become bipolar and so will reduce the magnitude by half (approx).
>> Is this commonly done? It will require a bit of computation to get
>> these values, but even a rough approximation should improve the max
>> error by a factor of two to around 2-3 ppm.
>
> Seems like that is one of the suggestions, but not done in the ROMs
> they were selling. Then the interpolation has to add or subtract,
> which is slightly (in TTL) harder.

Yes, I see this note. This seems to be exactly what I was thinking. But
I don't think the data has to be subtracted. The unadjusted error is
zero at the end points of each interpolated line and consistently
increases toward the middle from each end point. Adjusting the data
in the table results in an error that instead goes positive and
negative, but the table data is still always added, although they don't
deal with the issues of implementing more than 90 degrees. If you go
around the clock you then need to consider sign bits (as opposed to sine
bits) on the inputs to the tables.


> The interpolated sine was done in 1970 with 128x8 ROMs. With larger
> ROMs, like usual today, you shouldn't need it unless you want really
> high resolution.

I am working in a small FPGA with rather limited block RAM capabilities.
It is already on the board, so I can't swap it out. I also need two
generator circuits which pushes the limits a bit more. At least I don't
have any shortage of adder/subtractors or all the other logic this will
need.

Thanks for the pointer and the advice.

--

Rick

Phil Hobbs

unread,
Mar 18, 2013, 7:17:53 PM3/18/13
to
You're no fun anymore. :-)

It does have a 0.01% FS range, so you'd expect to get a lot better than
-80 dB.

But for fast stuff, I'd probably do the same as you--filter the
daylights out of it and amplify the residuals.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510 USA
+1 845 480 2058

glen herrmannsfeldt

unread,
Mar 18, 2013, 7:18:20 PM3/18/13
to
In comp.dsp rickman <gnu...@gmail.com> wrote:
> On 3/18/2013 2:10 PM, Jerry Avins wrote:

(snip, ending on CORDIC)

>> http://www.andraka.com/files/crdcsrvy.pdf should be a good start. Do you
>> remember Ray Andraka?

> Yes, of course. I should have thought of that. I don't go to his site
> often, but his stuff is always excellent. I may have even downloaded
> this paper before. I took a look at the CORDIC once a few years ago and
> didn't get too far. I'm happy with my current approach for the moment,
> but I will dig into this more another time.

I once thought I pretty well understood binary CORDIC, but it
is also used in decimal on many pocket calculators. I never got
far enough to figure out how they did that.

> I appreciate the link. Thanks Jerry.

-- glen

glen herrmannsfeldt

unread,
Mar 18, 2013, 7:48:15 PM3/18/13
to
In comp.dsp rickman <gnu...@gmail.com> wrote:

(snip, I wrote)
>> Seems what they instead do is implement

>> sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less
>> significant bits of theta. Also, cos(L) tends to be almost 1,
>> so they just say it is 1.

> I did a little work with this tonight and guess what, this is nearly the
> same as the linear interpolation. cos(L) is as you say, nearly one. So
> close in fact, that using the 10 msbs for M and an 18 bit output word,
> the value calculated for the 1023rd entry is 0x40000.

> That leaves sin(L). Surprise! This turns out to be a straight line! So
> the formula is FAPP the same as the linear interpolation with the added
> fudge factor of multiplying sin(L) by cos(M) rather than just
> multiplying L directly with the difference sin(M+1)-sin(M). The sin*cos
> method does require a second table to be stored. In fact, the paper you
> reference uses a bit of sleight of hand to include both the cos(M) *
> sin(L) at a lesser precision both in the input and output of this table.
> I'm not sure how well this would work with higher precision needs, but
> I suppose it is worth a bit further look. It would be nice to eliminate
> the multiplier which is used in the linear interpolation.

It has been a while since I used a table with actual interpolation, but
as I remember it they give you the spacing somewhere on the table.
Now, you could have one table to look up the spacing and another to
calculate the interpolation value (that is, a multiply table).

So, they divide the problem up into 16 different slopes, but the
slopes are equally distributed in theta.

> So I think this is just different strokes for different folks in the
> end. The sin*cos method might work out a little better with the RMS
> noise depending on how the second table is handled. But I think it may
> end up... well, in the noise anyway.

Well, it is also convenient for the size of ROMs they had available.
(Or the other way around. A convenient use for the size of ROMs
they had to sell.) The tradeoffs might be different for more,
smaller ROMs. I believe the manual I had (and didn't find) has the
actual ROM tables listed. At least I thought it did.

It could also be pipelined, especially with more levels of ROM.

> I think the real problem is that as the resolution of the input
> increases, the second table gets harder to keep small. But their trick
> of essentially tossing the middle bits seems to work fairly well.

(snip, I wrote)
>> Seems like that is one of the suggestions, but not done in the ROMs
>> they were selling. Then the interpolation has to add or subtract,
>> which is slightly (in TTL) harder.

> Yes, I see this note. This seems to be exactly what I was thinking. But
> I don't think the data has to be subtracted. The unadjusted error is
> zero at the end points of each interpolated line and consistently
> increases toward the middle from each end point. Adjusting the data
> in the table results in an error that instead goes positive and
> negative, but the table data is still always added, although they don't
> deal with the issues of implementing more than 90 degrees. If you go
> around the clock you then need to consider sign bits (as opposed to sine
> bits) on the inputs to the tables.

Oh, I think I see what you mean. But it might have bigger slope
discontinuities that way.

>> The interpolated sine was done in 1970 with 128x8 ROMs. With larger
>> ROMs, like usual today, you shouldn't need it unless you want really
>> high resolution.

> I am working in a small FPGA with rather limited block RAM capabilities.
> It is already on the board, so I can't swap it out. I also need two
> generator circuits which pushes the limits a bit more. At least I don't
> have any shortage of adder/subtractors or all the other logic this will
> need.

If you have more, smaller ROMs I think the idea can be repeated.
That is, have High, Mid, and Low bits, and two levels of interpolation.

> Thanks for the pointer and the advice.

-- glen

rickman

unread,
Mar 18, 2013, 7:53:17 PM3/18/13
to
On 3/17/2013 10:14 PM, robert bristow-johnson wrote:
> On 3/17/13 5:49 PM, rickman wrote:
>> On 3/16/2013 7:13 PM, robert bristow-johnson wrote:
>>>
>>> this *should* be a relatively simple issue, but i am confused
>>>
>>> On 3/16/13 6:30 PM, rickman wrote:
>>>> I've been studying an approach to implementing a lookup table (LUT) to
>>>> implement a sine function. The two msbs of the phase define the
>>>> quadrant. I have decided that an 8 bit address for a single quadrant is
>>>> sufficient with an 18 bit output.
>>>
>>> 10 bits or 1024 points. since you're doing linear interpolation, add one
>>> more, copy the zeroth point x[0] to the last x[1024] so you don't have
>>> to do any modulo (by ANDing with (1023-1) on the address of the second
>>> point. (probably not necessary for hardware implementation.)
>>>
>>>
>>> x[n] = sin( (pi/512)*n ) for 0 <= n <= 1024
>>
>> So you are suggesting a table with 2^n+1 entries? Not such a great idea
>> in some apps, like hardware.
>
> i think about hardware as a technology, not so much an app. i think that
> apps can have either hardware or software realizations.

Ok, semantics. What word should I use instead of "app"?


>> What is the advantage?
>
> in *software* when using a LUT *and* linear interpolation for sinusoid
> generation, you are interpolating between x[n] and x[n+1] where n is
> from the upper bits of the word that comes outa the phase accumulator:
>
> #define PHASE_BITS 10
> #define PHASE_MASK 0x001FFFFF // 2^21 - 1
> #define FRAC_BITS 11
> #define FRAC_MASK 0x000007FF // 2^11 - 1
> #define ROUNDING_OFFSET 0x000400
>
> long phase, phase_increment, int_part, frac_part;
>
> long y, x[1025];
>
> ...
>
> phase = phase + phase_increment;
> phase = phase & PHASE_MASK;
>
> int_part = phase >> FRAC_BITS;
> frac_part = phase & FRAC_MASK;
>
> y = x[int_part];
> y = y + (((x[int_part+1]-y)*frac_part + ROUNDING_OFFSET)>>FRAC_BITS);
>
>
> now if the lookup table is 1025 long with the last point a copy of the
> first (that is x[1024] = x[0]), then you need not mask int_part+1.
> x[int_part+1] always exists.
>
>
>> Also, why 10 bit
>> address for a 1024 element table?
>
> uh, because 2^10 = 1024?
>
> that came from your numbers: 8-bit address for a single quadrant and
> there are 4 quadrants, 2 bits for the quadrant.

No, that didn't come from my numbers, well, not exactly. You took my
numbers and turned it into a full 360 degree table. I was asking where
you got 10 bits, not why a 1024 table needs 10 bits. Actually, in your
case it is a 1025 table needing 11 bits.


>>> so a 21 bit total index. your frequency resolution would be 2^(-21) in
>>> cycles per sampling period or 2^(-21) * Fs. those 2 million values would
>>> be the only frequencies you can meaningful
>>
>> No, that is the phase sent to the LUT. The total phase accumulator can
>> be larger as the need requires.
>
> why wouldn't you use those extra bits in the linear interpolation if
> they are part of the phase word? not doing so makes no sense to me at
> all. if you have more bits of precision in the fractional part of the phase

Because that is extra logic and extra work. Not only that, it is beyond
the capabilities of the DAC I am using. As it is, the linear interp
will generate bits that extend below what is implied by the resolution
of the inputs. I may use them or I may lop them off.


>>>> Without considering rounding, this reaches a maximum at the last
>>>> segment
>>>> before the 90 degree point.
>>>
>>> at both the 90 and 270 degree points. (or just before and after those
>>> points.)
>>
>> I'm talking about the LUT. The LUT only considers the first quadrant.
>
> not in software. i realize that in hardware and you're worried about
> real estate, you might want only one quadrant in ROM, but in software
> you would have an entire cycle of the waveform (plus one repeated point
> at the end for the linear interpolation).
>
> and if you *don't* have the size constraint in your hardware target and
> you can implement a 1024 point LUT as easily as a 256 point LUT, then
> why not? so you don't have to fiddle with reflecting the single quadrant
> around a couple of different ways.

OK, thanks for the insight.


>>> do you mean biasing by 1/2 of a point? then your max error will be *at*
>>> the 90 and 270 degree points and it will be slightly more than what you
>>> had before.
>>
>> No, not quite right. There is a LUT with points spaced at 90/255 degrees
>> apart starting at just above 0 degrees. The values between points in the
>> table are interpolated with a maximum deviation near the center of the
>> interpolation. Next to 90 degrees the interpolation is using the maximum
>> interpolation factor which will result in a value as close as you can
>> get to the correct value if the end points are used to construct the
>> interpolation line. 90 degrees itself won't actually be represented, but
rather points on either side, 90±delta where delta is 360° / 2^(n+1)
>> with n being the number of bits in the input to the sin function.
>>
>
> i read this over a couple times and cannot grok it quite. need to be
> more explicit (mathematically, or with C or pseudocode) with what your
> operations are.

It is simple, the LUT gives values for each point in the table, precise
to the resolution of the output. The sin function between these points
varies from close to linear to close to quadratic. If you just
interpolate between the "exact" points, you have an error that is
*always* negative anywhere except the end points of the segments. So it
is better to bias the end points to center the linear approximation
somewhere in the middle of the real curve. Of course, this has to be
tempered by the resolution of your LUT output.

If you really want to see my work it is all in a spreadsheet at the
moment. I can send it to you or I can copy some of the formulas here.
My "optimization" is not nearly optimal. But it is an improvement over
doing nothing I believe and is not so hard. I may drop it when I go to
VHDL as it is iterative and may be a PITA to code in a function.


>>> if you assume an approximate quadratic behavior over that short segment,
>>> you can compute the straight line where the error in the middle is equal
>>> in magnitude (and opposite in sign) to the error at the end points.
>>> that's a closed form solution, i think.
>>
>> Yes, it is a little tricky because at this point we are working with
>> integer math (or technically fixed point I suppose).
>
> not with defining the points of the table. you do that in MATLAB or in C
> or something. your hardware gets its LUT spec from something you create
> with a math program.

I don't think I'll do that. I'll most likely code it in VHDL. Why use
yet another tool to generate the FPGA code? BTW, VHDL can do pretty
much anything other tools can do. I simulated analog components in my
last design... which is what I am doing this for. My sig gen won't work
on the right range and so I'm making a sig gen out of a module I build.

BTW, to do the calculation you describe above, it has to take into
account that the end points have to be truncated to finite resolution.
Without that it is just minimizing noise to a certain level only to have
it randomly upset. It may turn out that converting from real to finite
resolution by rounding is not optimal for such a function.


>> Rounding errors is
>> what this is all about. I've done some spreadsheet simulations and I
>> have some pretty good results. I updated it a bit to generalize it to
>> the LUT size and I keep getting the same max error counts (adjusted to
work with integers rather than fractions) ±3 no matter what the size of
>> the interpolation factor. I don't expect this and I think I have
>> something wrong in the calculations. I'll need to resolve this.
>>
>>
>>> dunno if that is what you actually want for a sinusoidal waveform
>>> generator. i might think you want to minimize the mean square error.
>>
>> We are talking about the lsbs of a 20+ bit word. Do you think there will
>> be much of a difference in result?
>
> i'm just thinking that if you were using LUT to generate a sinusoid that
> is a *signal* (not some parameter that you use to calculate other
> stuff), then i think you want to minimize the mean square error (which
> is the power of the noise relative to the power of the sinusoid). so the
> LUT values might not be exactly the same as the sine function evaluated
> at those points.

I don't get your point really. The LUT values will deviate from the sin
function because of one thing, the resolution of the output. The sin
value calculated will deviate because of three things, input resolution,
output resolution and the accuracy of the sin generation.

I get what you are saying about the mean square error. How would that
impact the LUT values? Are you saying to bias them to minimize the
error over the entire signal? Yes, that is what I am talking about, but
I don't want to have to calculate the mean square error over the full
sine wave. Currently that's two million points. I've upped my design
parameters to a 512 entry table and 10 bit interpolation.


>> I need to actually be able to do the
>> calculations and get this done rather than continue to work on the
>> process. Also, each end point affects two lines,
>
> what are "lines"? not quite following you.

Segments. I picture the linear approximations between end points as
lines. When you improve one line segment by moving one end point you
also impact the adjacent line segment. I have graphed these line
segments in the spreadsheet and looked at a lot of them. I think the
"optimizations" make some improvement, but I'm not sure it is terribly
significant. It turns out once you have a 512 entry table, the sin
function between the end points is pretty close to linear or the
deviation from linear is in the "noise" for an 18 bit output. For
example, between LUT(511) and LUT(512) is just a step of 1. How
quadratic can that be?


>> so there are tradeoffs,
>> make one better and the other worse? It seems to get complicated very
>> quickly.
>
> if you want to define your LUT to minimize the mean square error,
> assuming linear interpolation between LUT entries, that can be a little
> complicated, but i don't think harder than what i remember doing in grad
> school. want me to show you? you get a big system of linear equations to
> get the points. if you want to minimize the maximum error (again
> assuming linear interpolation), then it's that fitting a straight line
> to that little snippet of quadratic curve bit that i mentioned.

God! Thanks but no thanks. Is this really practical to solve for the
entire curve? Remember that all points affect other points, possibly
more than just the adjacent points. If point A changes point A+1, then
it may also affect point A+2, etc. What would be more useful would be
to have an idea of just how much *more* improvement can be found.


>>>
>>> sin(t + pi/2) = cos(t)
>>
>> How does that imply a quadratic curve at 90 degrees?
>
> sin(t) = t + ... small terms when t is small
>
> cos(t) = 1 - (t^2)/2 + ... small terms when t is small
>
>> At least I think like the greats!
>
> but they beat you by a few hundred years.

Blame my parents!

--

Rick

Robert Baer

Mar 18, 2013, 9:31:41 PM
glen herrmannsfeldt wrote:
> In comp.dsp Robert Baer<rober...@localnet.com> wrote:
>
> (snip, I wrote)
> I just tried it, copy and paste from the above link, and it worked.
>
> You might be sure to use Adobe reader if that matters.
>
> Also, it is page 273 in the document, page 282 in the PDF.
>
> (snip)
>
>> Problem with that..could see only front and back cover.
>> Different source perhaps?
>
I had used my "standard" 4.0 Acrobat; it fails in that manner very
reliably. Ver 7.0 works 100 percent; thanks.

Robert Baer

Mar 18, 2013, 9:44:51 PM
glen herrmannsfeldt wrote:
> In comp.dsp Robert Baer<rober...@localnet.com> wrote:
>
> (snip, I wrote)
>>> See:
>
>>> http://ia601506.us.archive.org/8/items/bitsavers_nationaldaMOSIntegratedCircuits_20716690/1972_National_MOS_Integrated_Circuits.pdf
>
>>> The description starts on page 273.
>
> I just tried it, copy and paste from the above link, and it worked.
>
> You might be sure to use Adobe reader if that matters.
>
> Also, it is page 273 in the document, page 282 in the PDF.
>
> (snip)
>
>> Problem with that..could see only front and back cover.
>> Different source perhaps?
>
One of many reasons i hate Adobe Reader 7.0: along the top is a
number of icons; one is the so-called hand tool and it defaults as
"selected" - BUT you see the "hand" only while moving the cursor and
when you stop that changes to the "select tool" so that it is IMPOSSIBLE
to move the page around.
When moving the hand, the mouse will highlight text if you push the left
button; it CANNOT grab!!!!!!!!!!!!!!!
That always works in 4.0 ..
Newer versions are worse.

Robert Baer

Mar 18, 2013, 10:03:11 PM
glen herrmannsfeldt wrote:
> In comp.dsp Robert Baer<rober...@localnet.com> wrote:
>
> (snip, I wrote)
>>> See:
>
>>> http://ia601506.us.archive.org/8/items/bitsavers_nationaldaMOSIntegratedCircuits_20716690/1972_National_MOS_Integrated_Circuits.pdf
>
>>> The description starts on page 273.
>
> I just tried it, copy and paste from the above link, and it worked.
>
> You might be sure to use Adobe reader if that matters.
>
> Also, it is page 273 in the document, page 282 in the PDF.
>
> (snip)
>
>> Problem with that..could see only front and back cover.
>> Different source perhaps?
>
The basic approach given there is very good; use of puny 4 or 5 digit
tables is a major weakness: sqrt(2)/2 as 0.7071 is junk; CRC in the
squares, cubes and roots section gives sqrt(2) as 1.414214 (6 digits)
and sqrt(50) as 7.07168 (also 6 digits).
And if one needs more, use the Taylor series (for reasonably small
angles) as it converges fast (Nth odd power and Nth odd factorial).
--or, use the FPU in the computer to get SIN and COS at the same time!

Reduction from 360 degrees helps:
"Split" 180 degrees: sin(-x)=-sin(x)
"Split" 90 degrees: sin(180-x)=sin(x)
"Split" 45 degrees: sin(90-x)=cos(x)
"Split" 22.5 degrees: sin(45-x) = sqrt(0.5)*(cos(x)-sin(x))
Small angles require fewer iterations for a given accuracy.
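Chained together, those reductions look something like this (Python, degrees in; the function name is mine, and the final evaluation only ever sees angles of 45 degrees or less):

```python
import math

def sin_reduced(x):
    """sin(x degrees), folding the argument down with the identities above
    so only angles in [0, 45] degrees are ever evaluated directly."""
    x = x % 360.0
    sign = 1.0
    if x >= 180.0:            # sin(x) = -sin(x - 180)
        sign, x = -1.0, x - 180.0
    if x > 90.0:              # sin(x) = sin(180 - x)
        x = 180.0 - x
    if x > 45.0:              # sin(x) = cos(90 - x)
        return sign * math.cos(math.radians(90.0 - x))
    return sign * math.sin(math.radians(x))
```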

Jamie

Mar 18, 2013, 10:23:08 PM
But, but, but, but,.... we work in radians!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Except maybe a carpenter!

Jamie

Eric Jacobsen

Mar 18, 2013, 11:05:39 PM
If you just type "CORDIC FPGA" in Google it's the third link that
comes up.

glen herrmannsfeldt

Mar 18, 2013, 11:56:43 PM
In comp.dsp Robert Baer <rober...@localnet.com> wrote:

(snip)
> I had used my "standard" 4.0 Acrobat; fails in that manner very
> reliably. ver 7.0 works 100 percent; thanks.

For a long time (years) I was using 5.0 because it had some features
that later ones didn't. I had 8.0 for the cases where that failed, but
you can't run both at once. If you run another one, it just passes
the file to the first one.

Some time ago, I wrote a program to generate LZW compressed PDF
from scanned bitmaps as PDF 1.0 files. (Among other things, only
the usual 7 bit ASCII is allowed.)

Later, I wrote on for JBIG2 compression, which I believe is 1.2
(That is, Acrobat 3.0.)

But yes, many files require newer versions.

-- glen

glen herrmannsfeldt

Mar 19, 2013, 12:05:59 AM
In comp.dsp Robert Baer <rober...@localnet.com> wrote:
>> (snip)

>>> Problem with that..could see only front and back cover.

> The basic approach given there is very good; use of puny 4 or 5 digit
> tables is a major weakness: sqrt(2)/2 as 0.7071 is junk; CRC in the
> squares, cubes and roots section gives sqrt(2) as 1.414214 (6 digits)
> and sqrt(50) as 7.07168 (also 6 digits).
> And if one needs more, use the Taylor series (for reasonably small
> angles) as it converges fast (Nth odd power and Nth odd factorial).
> --or, use the FPU in the computer to get SIN and COS at the same time!

But not so bad for 43 years ago!

That was close to the beginning of MOS mask ROMs. It requires a 12V
power supply, though the outputs are still TTL compatible, using
both -12V and +5V power supplies.

> Reduction from 360 degrees helps:
> "Split" 180 degrees: sin(-x)=-sin(x)
> "Split" 90 degrees: sin(180-x)=sin(x)
> "Split" 45 degrees: sin(90-x)=cos(x)
> "Split" 22.5 degrees: sin(45-x) = sqrt(0.5)*(cos(x)-sin(x))
> Small angles require fewer iterations for a given accuracy.

-- glen

robert bristow-johnson

Mar 19, 2013, 12:18:35 AM
On 3/19/13 12:05 AM, glen herrmannsfeldt wrote:
...
> It requires a 12V
> power supply, though the outputs are still TTL compatible, using
> both -12V and +5V power supplies.
>

my goodness we're old, glen.

--

r b-j r...@audioimagination.com

"Imagination is more important than knowledge."


Eric Jacobsen

Mar 19, 2013, 12:21:52 AM
On Tue, 19 Mar 2013 00:18:35 -0400, robert bristow-johnson
<r...@audioimagination.com> wrote:

>On 3/19/13 12:05 AM, glen herrmannsfeldt wrote:
>...
>> It requires a 12V
>> power supply, though the outputs are still TTL compatible, using
>> both -12V and +5V power supplies.
>>
>
>my goodness we're old, glen.

I'm glad it's just you guys. ;)

robert bristow-johnson

Mar 19, 2013, 12:38:44 AM
On 3/19/13 12:21 AM, Eric Jacobsen wrote:
> On Tue, 19 Mar 2013 00:18:35 -0400, robert bristow-johnson
> <r...@audioimagination.com> wrote:
>
>> On 3/19/13 12:05 AM, glen herrmannsfeldt wrote:
>> ...
>>> It requires a 12V
>>> power supply, though the outputs are still TTL compatible, using
>>> both -12V and +5V power supplies.
>>>
>>
>> my goodness we're old, glen.
>
> I'm glad it's just you guys. ;)

what was that EPROM? a 2716 or 2708 or something like that?

Jasen Betts

Mar 19, 2013, 1:12:31 AM
On 2013-03-18, Eric Jacobsen <eric.j...@ieee.org> wrote:
>
> He indicated that his target is an FPGA that has no multiplies. This
> is the exact environment where the CORDIC excels. LUTs can be
> expensive in FPGAs depending on the vendor and the device.

yeah, you're right, I looked at the code on the wikipedia page and it
was full of multiplies, but looking closer reveals they are all shifts
and sign changes.

--
⚂⚃ 100% natural

--- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---

glen herrmannsfeldt

Mar 19, 2013, 2:30:37 AM
In comp.dsp robert bristow-johnson <r...@audioimagination.com> wrote:

(snip, I wrote)
>>>> It requires a 12V
>>>> power supply, though the outputs are still TTL compatible, using
>>>> both -12V and +5V power supplies.

>>> my goodness we're old, glen.

>> I'm glad it's just you guys. ;)

> what was that EPROM? a 2716 or 2708 or something like that?

As I remember it, the 2708 requires +12, +5, and -5V.
Also, the programmer has to put pulses of just the right
current/voltage and timing into the data pins when writing.

The Intel 2716 is 5V only, and for writing the data pins
are at TTL levels. Only one pin needs to have an appropriate
timed pulse.

But the TI 2716 is more like the 2708.

-- glen

rickman

Mar 19, 2013, 7:36:10 PM
On 3/18/2013 9:44 PM, Robert Baer wrote:
>>
> One of many reasons i hate Adobe Reader 7.0: along near the top is a
> number of cons, one is the so-called hand tool and it defaults as
> "selected" - BUT you see the "hand" only while moving the cursor and
> when you stop that changes to the "select tool" so that it is IMPOSSIBLE
> to move the page around.
> When moving the hand, the mouse will highlight text if push left button;
> CANNOT grab!!!!!!!!!!!!!!!
> That always works in 4.0 ..
> Newer versions are worse.

I have always felt that in terms of the UI, Acrobat was one of the
*worst* commercial software packages I have ever seen and every new
version just gets worse. On top of that, they seem to produce much more
buggy code than most. I used it in one job as part of our "ISO" process
for document sign off. This part of the tool was so bad that it
actually impacted our productivity and not in a positive way. Yet, big
companies continue to buy the tool because it is the one with the most
professional image. It's not like companies actually test this stuff...

--

Rick

Robert Baer

Mar 19, 2013, 9:16:45 PM
So?? Convert already - or are you one of those heathens or worse,
infidels?

Martin Brown

Mar 20, 2013, 4:25:13 AM
On 16/03/2013 22:30, rickman wrote:
> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function. The two msbs of the phase define the
> quadrant. I have decided that an 8 bit address for a single quadrant is
> sufficient with an 18 bit output. Another 11 bits of phase will give me
> sufficient resolution to interpolate the sin() to 18 bits.
>
> If you assume a straight line between the two endpoints the midpoint of
> each interpolated segment will have an error of
>> ((sin(high)+sin(low))/2)-sin(mid)
>
> Without considering rounding, this reaches a maximum at the last segment
> before the 90 degree point. I calculate about 4-5 ppm which is about
> the same as the quantization of an 18 bit number.
>
> There are two issues I have not come across regarding these LUTs. One
> is adding a bias to the index before converting to the sin() value for
> the table. Some would say the index 0 represents the phase 0 and the
> index 2^n represents 90 degrees. But this is 2^n+1 points which makes a
> LUT inefficient, especially in hardware. If a bias of half the lsb is
> added to the index before converting to a sin() value the value 0 to
> 2^n-1 becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized
> table properly. I assume this is commonly done and I just can't find a
> mention of it.
>
> The other issue is how to calculate the values for the table to give the
> best advantage to the linear interpolation. Rather than using the exact
> match to the end points stored in the table, an adjustment could be done
> to minimize the deviation over each interpolated segment. Without this,
> the errors are always in the same direction. With an adjustment the
> errors become bipolar and so will reduce the magnitude by half (approx).
> Is this commonly done? It will require a bit of computation to get
> these values, but even a rough approximation should improve the max
> error by a factor of two to around 2-3 ppm.

The other question is why bother with linear interpolation when you can
have quadratic accuracy with only a little more effort?
>
> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
> advantage of this resolution! lol
>
> One thing I learned while doing this is that near 0 degrees the sin()
> function is linear (we all knew that, right?) but near 90 degrees, the
> sin() function is essentially quadratic. Who would have thunk it?

It is obvious from the shape of the sine curve!

Classic sin(A+B) = sin(A)cos(B)+cos(A)sin(B)

If A is the actual nearest lookup and B<<1 then you can approximate

cos(B) = 1 - B^2/2
sin(B) = B

For known fixed B these can also be tabulated (exactly or approx).

Then sin(A+B) = sin(A) + cos(A)*B - sin(A)*B^2/2

Sin(A) is obtained by indexing forwards in your first quadrant LUT
Cos(A) by indexing backwards from the end of the table

More lookups needed but about the same number of multiplies.

Sorting out phase is left as an exercise for the reader.
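A behavioral model of that scheme (Python; a 512-entry table plus one guard entry so cos(A) can be read backwards out of the same quadrant table, with sizes borrowed from the thread rather than prescribed):

```python
import math

N = 512                                          # quadrant LUT size
# one guard entry at index N so cos(A) = lut[N - idx] works for idx = 0
lut = [math.sin(i / N * math.pi / 2) for i in range(N + 1)]

def sin_quadratic(phase):
    """phase in [0, 1) maps to [0, 90) degrees.  Quadratic interpolation
    via sin(A+B) ~ sin(A) + cos(A)*B - sin(A)*B^2/2 for small B."""
    idx = int(phase * N)
    B = (phase * N - idx) * (math.pi / 2) / N    # residual angle, radians
    sinA = lut[idx]
    cosA = lut[N - idx]                          # cos(A), indexed backwards
    return sinA + cosA * B - sinA * B * B / 2
```

The leftover error is the cubic term, cos(A)*B^3/6, which with B below pi/1024 is down around 1e-8: far below an 18-bit lsb.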

The other method popular in FFT computer codes is to store tabulated the
best approximation to the discrete roots of (1, 0)^(1/N)

In practice storing slightly tweaked values of

s = sin(B)
s2 = 2*(sin(B/2)^2)

Where B = 2pi/N

And adjust so that to within acceptable tolerance it comes back to 1,0
after N iterations. Then reset to exactly 1,0 each time around.

sn' = sn - s2*sn - s*cn
cn' = cn - s2*cn + s*sn

(subject to my memory, typos and sign convention errors)

Probably only worth it if you need both sin and cos.
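Modulo those sign conventions, here is a Python version of the recurrence with the signs chosen so each step advances by +B; the 2*sin^2(B/2) form avoids the cancellation you would get computing 1 - cos(B) directly:

```python
import math

def sincos_table(N):
    """Tabulate (sin, cos) at the N-th roots of unity by iterating the
    coupled recurrence.  Both updates must use the *old* sn, cn."""
    B = 2 * math.pi / N
    s = math.sin(B)
    s2 = 2 * math.sin(B / 2) ** 2        # equals 1 - cos(B), better conditioned
    sn, cn = 0.0, 1.0
    out = []
    for _ in range(N):
        out.append((sn, cn))
        # rotate (cn, sn) by +B; tuple assignment keeps the updates simultaneous
        sn, cn = sn - s2 * sn + s * cn, cn - s2 * cn - s * sn
    return out, (sn, cn)                 # second value shows the wrap-around error

table, wrap = sincos_table(1024)
```

After N steps the pair should land back near (0, 1); that wrap-around error is what you would tweak the stored s and s2 values to cancel before resetting to exactly (0, 1).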


--
Regards,
Martin Brown

josephkk

Mar 20, 2013, 7:05:39 PM
>Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
>advantage of this resolution! lol
>
>One thing I learned while doing this is that near 0 degrees the sin()
>function is linear (we all knew that, right?) but near 90 degrees, the
>sin() function is essentially quadratic. Who would have thunk it?

Without reading any other replies, i would suggest either the CORDIC
algorithm or Simpson's rule interpolation. From the rule for integration,
you evaluate the curve value at any intermediate point within the
segment covered by the rule. 4 intervals on the step size you have will be
limited by the 18 bits of input. Do 20 bits of input and round to 18 or
not. About 1 PPM.

?-)

rickman

Mar 20, 2013, 7:25:02 PM
On 3/20/2013 7:05 PM, josephkk wrote:
>
> Without reading any other replies, i would suggest either the CORDIC
> algorithm of simpson's rule interpolation. From the rule for integration
> to find any intermediate value you evaluate the curve value within the
> segment covered by rule. 4 intervals on the step size you have will be
> limited by the 18 bits of input. Do 20 bits of input and round to 18 or
> not. About 1 PPM.

I can't say I follow what you mean about Simpson's rule interpolation.
Interpolation is used to derive Simpson's Rule, but I don't see a way to
use an integral for obtaining an interpolation. Isn't Simpson's Rule
about approximating integrals? What am I missing?

--

Rick

josephkk

Mar 20, 2013, 8:03:25 PM
Hmm. I would like to take a crack at simpson's rule interpolation for
this to measure error budget. I may not be timely but this is
interesting. joseph underscore barrett at sbcglobal daht cahm


?-)

josephkk

Mar 20, 2013, 8:11:18 PM
On Sun, 17 Mar 2013 17:56:21 -0400, rickman <gnu...@gmail.com> wrote:

>On 3/16/2013 8:02 PM, lang...@fonz.dk wrote:
>>> --
>>>
>>> Rick
>>
>> why not skip the lut and just do a full sine approximation
>>
>> something like this, page 53-54
>>
>> http://www.emt.uni-linz.ac.at/education/Inhalte/pr_mderf/downloads/ADSP-2181/Using%20the%20ADSP-2100%20Family%20Volume%201/Using%20the%20ADSP-2100%20Family%20Volume%201.pdf
>
>I'm not sure this would be easier. The LUT and interpolation require a
>reasonable amount of logic. This requires raising X to the 5th power
>and five constant multiplies. My FPGA doesn't have multipliers and this
>may be too much for the poor little chip. I suppose I could do some of
>this with a LUT and linear interpolation... lol

No multipliers(?); ouch. My Simpson's rule method may fit but it will
need a trapezoidal strip out of a full Wallace tree multiplier. Preferably
three (speed issue, that may not be that big for you). How do you do the
linear interpolation without a multiplier?

?-)

josephkk

Mar 20, 2013, 8:22:48 PM
On Mon, 18 Mar 2013 14:10:54 -0400, Jerry Avins <j...@ieee.org> wrote:

>
>>> I see, I assumed you had an MCU (or fpga) with multipliers
>>>
>>> considered CORDIC?
>>
>> I should learn that. I know it has been used to good effect in FPGAs.
>> Right now I am happy with the LUT, but I would like to learn more about
>> CORDIC. Any useful referrences?
>
>http://www.andraka.com/files/crdcsrvy.pdf should be a good start. Do you
>remember Ray Andraka?
>
>Jerry
>--
>Engineering is the art of making what you want from things you can get.
>¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Thanx a bunch. But then you are a go-to kind of guy.

?-)

josephkk

Mar 20, 2013, 9:07:43 PM
On Sun, 17 Mar 2013 18:02:27 -0400, rickman <gnu...@gmail.com> wrote:

>
>>> One thing I learned while doing this is that near 0 degrees the sin()
>>> function is linear (we all knew that, right?) but near 90 degrees, the
>>> sin() function is essentially quadratic. Who would have thunk it?
>>>
>> Sounds like you are making excellent improvements on the standard ho-hum
>> algorithms; the net result will be superior to anything done out there
>> (commercially).
>> With the proper offsets, one needs only 22.5 degrees of lookup ("bounce"
>> off each multiple of 45 degrees).
>
>I can't say I follow this. How do I get a 22.5 degree sine table to
>expand to 90 degrees?
>
>As to my improvements, I can see where most implementations don't bother
>pushing the limits of the hardware, but I can't imagine no one has done
>this before. I'm sure there are all sorts of apps, especially in older
>hardware where resources were constrained, where they pushed for every
>drop of precision they could get. What about space apps? My
>understanding is they *always* push their designs to be the best they
>can be.

I agree. A 45 degree sine and cosine table will do the job, as will a 90
degree sine or cosine table.

?-)

josephkk

Mar 20, 2013, 9:24:13 PM
On Sun, 17 Mar 2013 18:44:07 -0400, rickman <gnu...@gmail.com> wrote:

>On 3/17/2013 6:20 PM, John Larkin wrote:
>>
>> It would be interesting to distort the table entries to minimize THD. The
>> problem there is how to measure very low THD to close the loop.
>
>It could also be a difficult problem to solve without some sort of
>exhaustive search. Each point you fudge affects two curve segments and
>each curve segment is affected by two points. So there is some degree
>of interaction.
>
>Anyone know if there is a method to solve such a problem easily? I am
>sure that my current approach gets each segment end point to within ±1
>lsb and I suppose once you measure THD in some way it would not be an
>excessive amount of work to tune the 256 end points in a few passes
>through. This sounds like a tree search with pruning.
>
>I'm assuming there would be no practical closed form solution. But
>couldn't the THD be calculated in a simulation rather than having to be
>measured on the bench?

The calculation can be done (most spice versions will do this, even excel
can be used), but that doesn't mean that bench testing will give the same
results. The calculation must be done on the final value series.
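For the simulation route, the harmonics can be pulled straight out of a DFT of one exact period of the final value series. A minimal Python sketch (stdlib only; the `thd_of` name and the 10-harmonic cutoff are my choices, and the quantized sine here is only a crude stand-in for the real interpolated output):

```python
import math

def thd_of(samples):
    """THD of one exact period: RMS of harmonics 2..10 over the
    fundamental, taken from direct DFT bins (no windowing needed
    because the record is exactly periodic)."""
    N = len(samples)
    def bin_mag(k):
        re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        return math.hypot(re, im)
    fund = bin_mag(1)
    harm = math.sqrt(sum(bin_mag(k) ** 2 for k in range(2, 11)))
    return harm / fund

# One period of a sine quantized to 18 bits, as a stand-in waveform
N = 1024
FULL = (1 << 17) - 1
wave = [round(FULL * math.sin(2 * math.pi * n / N)) / FULL for n in range(N)]
print(thd_of(wave))
```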

?-)

josephkk

Mar 20, 2013, 11:53:13 PM
On Sun, 17 Mar 2013 17:22:59 -0700, John Larkin
<jjla...@highNOTlandTHIStechnologyPART.com> wrote:

>
>>I don't think you are clear on this. The table and linear interp are as
>>good as they can get with the exception of one or possibly two lsbs to
>>minimize the error in the linear interp. There is no way to correct for
>>this by adding a second harmonic, etc. Remember, this is not a pure
>>table lookup, it is a two step approach.
>
>Why can't some harmonic component be added to the table points to improve
>downstream distortion? Of course, you'd need a full 360 degree table, not a
>folded one.

How do you get that? All harmonics would be within the 90 degree table.
Harmonic phase shifts are a different matter though; that would take
careful characterization of production quantities of the final product,
$$$$$.

?-)

glen herrmannsfeldt

Mar 21, 2013, 12:18:04 AM
In comp.dsp josephkk <joseph_...@sbcglobal.net> wrote:

(snip, someone wrote)
>>I'm not sure this would be easier. The LUT and interpolation require a
>>reasonable amount of logic. This requires raising X to the 5th power
>>and five constant multiplies. My FPGA doesn't have multipliers and this
>>may be too much for the poor little chip. I suppose I could do some of
>>this with a LUT and linear interpolation... lol

> No multipliers(?); ouch. My Simpson's rule method may fit but it will
> need a trapezoidal strip out of a full Wallace tree multiplier. Preferably
> three (speed issue, that may not be that big for you). How do you do the
> linear interpolation without a multiplier?

The LUT is indexed by the high (4) bits of the input and the low bits
to interpolate with. So it is, pretty much, 16 different interpolation tables
which, in other words, is multiply by table lookup.
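A toy model of that structure (Python; the 4-high/6-low split is from the description above, and the table contents are just sin() for concreteness). The per-(segment, fraction) correction table is the "multiply by table lookup":

```python
import math

HI, LO = 4, 6                       # 4 high bits pick a segment, 6 low bits interpolate
NH, NL = 1 << HI, 1 << LO

# Segment endpoints, plus one guard entry for the last segment's far end
base = [math.sin(h / NH * math.pi / 2) for h in range(NH + 1)]

# For each segment, pre-tabulate slope * fraction for every low-bits value:
# the interpolation multiply replaced by a second lookup.
corr = [[(base[h + 1] - base[h]) * l / NL for l in range(NL)] for h in range(NH)]

def sin_lookup(phase_bits):
    """phase_bits in [0, 2^(HI+LO)) covering the first quadrant."""
    h, l = phase_bits >> LO, phase_bits & (NL - 1)
    return base[h] + corr[h][l]
```

With only 4+6 phase bits the residual error is dominated by the chord sag of the coarse segments, a bit over 1e-3 here; the real trade is table storage against the multiplier you saved.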

-- glen

josephkk

Mar 21, 2013, 11:02:56 AM
On Mon, 18 Mar 2013 19:17:53 -0400, Phil Hobbs
<pcdhSpamM...@electrooptical.net> wrote:

>On 3/18/2013 10:26 AM, John Larkin wrote:
>> On Mon, 18 Mar 2013 10:12:45 -0400, Phil Hobbs
>> <pcdhSpamM...@electrooptical.net> wrote:
>>
>>> On 03/17/2013 08:27 PM, John Larkin wrote:
>>>> On Sun, 17 Mar 2013 19:31:38 -0400, Phil Hobbs
>>>> <pcdhSpamM...@electrooptical.net> wrote:
>>>>
>>>>> On 3/17/2013 6:44 PM, rickman wrote:
>>>>>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>>>>>
>>>>>>> It would be interesting to distort the table entries to minimize THD. The
>>>>>>> problem there is how to measure very low THD to close the loop.
>>>>>>
>>>>>> It could also be a difficult problem to solve without some sort of
>>>>>> exhaustive search. Each point you fudge affects two curve segments and
>>>>>> each curve segment is affected by two points. So there is some degree
>>>>>> of interaction.
>>>>>>
>>>>>> Anyone know if there is a method to solve such a problem easily? I am
>>>>>> sure that my current approach gets each segment end point to within ±1
>>>>>> lsb and I suppose once you measure THD in some way it would not be an
>>>>>> excessive amount of work to tune the 256 end points in a few passes
>>>>>> through. This sounds like a tree search with pruning.
>>>>>>
>>>>>> I'm assuming there would be no practical closed form solution. But
>>>>>> couldn't the THD be calculated in a simulation rather than having to be
>>>>>> measured on the bench?
>>>>>>
>>>>>
>>>>> The simple method would be to compute a zillion-point FFT. Since the
>>>>> function is really truly periodic, the FFT gives you the right answer
>>>>> for the continuous-time Fourier transform, provided you have enough
>>>>> points that you can see all the relevant harmonics.
>>>>>
>>>>
>>>> All you need now is a digitizer or spectrum analyzer that has PPM distortion!
>>>>
>>>> We've bumped into this problem, trying to characterize some of our ARBs. The
>>>> best thing is to use a passive lowpass or notch filter (with very good Ls and
>>>> Cs!) to remove the fundamental and analyze what's left.
>>>>
>>>>
>>>
>>> I have an HP 339A you can borrow. ;)
>>>
>>> Cheers
>>>
>>> Phil Hobbs
>>
>> That's for audio. Who cares about distortion for audio? And it gets nowhere near
>> PPM turf.
>>
>>
>
>You're no fun anymore. :-)
>
>It does have a 0.01% FS range, so you'd expect to get a lot better than
> -80 dB.
>
>But for fast stuff, I'd probably do the same as you--filter the
>daylights out of it and amplify the residuals.
>
>Cheers
>
>Phil Hobbs

Which is how most old school distortion analyzers work. Pop'tronics did a
four part series on one that used auto-tuned parallel-t filters with over
80 dB fundamental notch rejection in the late 60s or early 70s. Really a
slick design in a lot of ways, especially for a hobby mag.

?-)

John Larkin

Mar 21, 2013, 11:40:28 AM
On Thu, 21 Mar 2013 08:02:56 -0700, josephkk <joseph_...@sbcglobal.net>
wrote:

>On Mon, 18 Mar 2013 19:17:53 -0400, Phil Hobbs
><pcdhSpamM...@electrooptical.net> wrote:
>
>>On 3/18/2013 10:26 AM, John Larkin wrote:
>>> On Mon, 18 Mar 2013 10:12:45 -0400, Phil Hobbs
>>> <pcdhSpamM...@electrooptical.net> wrote:
>>>
>>>> On 03/17/2013 08:27 PM, John Larkin wrote:
>>>>> On Sun, 17 Mar 2013 19:31:38 -0400, Phil Hobbs
>>>>> <pcdhSpamM...@electrooptical.net> wrote:
>>>>>
>>>>>> On 3/17/2013 6:44 PM, rickman wrote:
>>>>>>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>>>>>>
>>>>>>>> It would be interesting to distort the table entries to minimize THD. The
>>>>>>>> problem there is how to measure very low THD to close the loop.
>>>>>>>
>>>>>>> It could also be a difficult problem to solve without some sort of
>>>>>>> exhaustive search. Each point you fudge affects two curve segments and
>>>>>>> each curve segment is affected by two points. So there is some degree
>>>>>>> of interaction.
>>>>>>>
>>>>>>> Anyone know if there is a method to solve such a problem easily? I am
>>>>>>> sure that my current approach gets each segment end point to within ±1
That's how the HP339A works, with photoresistors used to fine tune the notches.
There was an equivalent Heathkit.

>
>?-)

It gets challenging at higher frequencies (MHz) and really low distortion
levels.


--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators

John Larkin

Mar 21, 2013, 1:14:53 PM
On Wed, 20 Mar 2013 20:53:13 -0700, josephkk
<joseph_...@sbcglobal.net> wrote:

>On Sun, 17 Mar 2013 17:22:59 -0700, John Larkin
><jjla...@highNOTlandTHIStechnologyPART.com> wrote:
>
>>
>>>I don't think you are clear on this. The table and linear interp are as
>>>good as they can get with the exception of one or possibly two lsbs to
>>>minimize the error in the linear interp. There is no way to correct for
>>>this by adding a second harmonic, etc. Remember, this is not a pure
>>>table lookup, it is a two step approach.
>>
>>Why can't some harmonic component be added to the table points to improve
>>downstream distortion? Of course, you'd need a full 360 degree table, not a
>>folded one.
>
>How do you get that? All harmonics would be within the 90 degree table.

Second?

>Harmonic phase shifts is a different matter though, that would take
>careful characterization of production quantities of the final product,
>$$$$$.

Of course you'd have to null each harmonic for gain and phase,
preferably on each production unit, but it wouldn't be difficult or
expensive *if* you had a distortion analyzer with the required PPM
performance. We regularly do automated calibrations, polynomial curve
fits and such, more complex than this.


--

John Larkin Highland Technology, Inc

jlarkin at highlandtechnology dot com
http://www.highlandtechnology.com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom laser drivers and controllers
Photonics and fiberoptic TTL data links
VME thermocouple, LVDT, synchro acquisition and simulation

rickman

Mar 21, 2013, 5:49:41 PM
I can do multiplies without a multiplier block. It gets to be
problematic to do a *lot* of multiplies. The interp will be done in a
shift/add type multiplier as well as the 8 bit amplitude multiplier for
the scale factor (which is already in an existing product). The CODEC I
am using has 128 clock cycles per sample so there is time to use a serial
Booth's multiplier or two.
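A behavioral sketch of the serial Booth's idea (Python, radix-2 recoding, one add/subtract-and-shift per "clock"; purely a model for checking the arithmetic, not the VHDL):

```python
def booth_multiply(a, b, bits=18):
    """Radix-2 Booth multiply of two signed integers that fit in `bits`
    bits: scan the multiplier LSB-first, and on each 0->1 or 1->0
    transition subtract or add the shifted multiplicand."""
    acc = 0
    prev = 0                    # the implicit bit below the LSB
    for i in range(bits):
        cur = (b >> i) & 1      # works for negative b too (arithmetic shift)
        if cur == 1 and prev == 0:
            acc -= a << i       # '10' pair: subtract shifted multiplicand
        elif cur == 0 and prev == 1:
            acc += a << i       # '01' pair: add shifted multiplicand
        prev = cur
    return acc
```

Runs of identical multiplier bits cost nothing but the shift, which is why the serial version fits a small FPGA nicely.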

--

Rick

rickman

Mar 21, 2013, 5:51:42 PM
Oh, I thought you were talking about using Simpsom's rule as part of the
interpolation. You are talking about using it to estimate the integral
of the error. Great idea! I'll shoot you an email.

--

Rick

rickman

Mar 21, 2013, 6:20:03 PM
That is the poor man's LUT approach where the resources are precious.
It requires a "coarse" table using the high and mid bits of the input
and a second table indexed by the high and low bits of the input for a
correction. I have a 512x18 LUT available and in fact, it is dual
ported for simultaneous access if needed. This by itself gives pretty
good results. The linear approximation would be done with a
serial/parallel Booth's radix-2 technique using n clock cycles where n is
the number of bits in the multiplier. I've got some time for each
sample and expect to share the hardware between two channels.

--

Rick

robert bristow-johnson

Mar 21, 2013, 6:37:47 PM
On 3/21/13 6:20 PM, rickman wrote:
> On 3/21/2013 12:18 AM, glen herrmannsfeldt wrote:
...
>>
>> The LUT is indexed by the high (4) bits of the input and the low bits
>> to interpolate with. So it is, pretty much, 16 different interpolation
>> tables
>> which, in other words, is multiply by table lookup.
>
> That is the poor man's LUT approach where the resources are precious. It
> requires a "coarse" table using the high and mid bits of the input and a
> second table indexed by the high and low bits of the input for a
> correction.

that's sorta the case for even the middle-class man's LUT and
interpolation. you're always splitting a continuous-time parameter
(however many bits are used to express it) into an integer part (that
selects the set of samples used in the interpolation) and a fractional
part (that defines how this set of samples are to be combined to be the
interpolated value).
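The integer/fractional split described above can be sketched directly, assuming the 8 bit table index, 11 fractional bits, and 18 bit output discussed earlier in the thread. For simplicity this sketch stores 2^n+1 entries so each segment has an explicit endpoint, rather than using the half-LSB bias trick:

```python
import math

TABLE_BITS = 8        # table index bits (one quadrant)
FRAC_BITS = 11        # fractional phase bits used for interpolation
N = 1 << TABLE_BITS

# Quarter-wave sine table with 18 bit output; N+1 entries so each
# interpolated segment has an explicit endpoint.
LUT = [round((2**17 - 1) * math.sin(math.pi / 2 * i / N)) for i in range(N + 1)]

def sin_interp(phase):
    """phase covers one quadrant: 0 .. 2**(TABLE_BITS + FRAC_BITS) - 1."""
    idx = phase >> FRAC_BITS                 # integer part: selects the segment
    frac = phase & ((1 << FRAC_BITS) - 1)    # fractional part: position within it
    lo, hi = LUT[idx], LUT[idx + 1]
    return lo + ((hi - lo) * frac >> FRAC_BITS)
```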

rickman

unread,
Mar 21, 2013, 6:45:02 PM3/21/13
to
On 3/21/2013 1:14 PM, John Larkin wrote:
> On Wed, 20 Mar 2013 20:53:13 -0700, josephkk
> <joseph_...@sbcglobal.net> wrote:
>
>> On Sun, 17 Mar 2013 17:22:59 -0700, John Larkin
>> <jjla...@highNOTlandTHIStechnologyPART.com> wrote:
>>
>>>
>>>> I don't think you are clear on this. The table and linear interp are as
>>>> good as they can get with the exception of one or possibly two lsbs to
>>>> minimize the error in the linear interp. These is no way to correct for
>>>> this by adding a second harmonic, etc. Remember, this is not a pure
>>>> table lookup, it is a two step approach.
>>>
>>> Why can't some harmonic component be added to the table points to improve
>>> downstream distortion? Of course, you'd need a full 360 degree table, not a
>>> folded one.
>>
>> How do you get that? All harmonics would be within the 90 degree table.
>
> Second?

Actually this would not work for anything except odd harmonics. The way
the table is folded, any added component has to be symmetrical about the
90 degree point as well as antisymmetric about the zero crossings at 0
and 180 degrees. Odd harmonics would do this if they are phased
correctly. I don't think it would be too useful that way.


>> Harmonic phase shifts is a different matter though, that would take
>> careful characterization of production quantities of the final product,
>> $$$$$.
>
> Of course you'd have to null each harmonic for gain and phase,
> preferably on each production unit, but it wouldn't be difficult or
> expensive *if* you had a distortion analyzer with the required PPM
> performance. We regularly do automated calibrations, polynomial curve
> fits and such, more complex than this.

Might as well use a separate table for the nulling, it would be much
simpler.

--

Rick

Jamie

unread,
Mar 21, 2013, 9:21:57 PM3/21/13
to
Whatever happened to the good old days of generating sine waves
digitally instead of using these DDS chips that no one seems to know
exactly how they work? Well, that seems to be the picture I am getting.
----
You need an incrementer (32 bit for example), a phase accumulator (32
bits) and let's say a 12 bit DAC.

All this does is generate a +/- triangle reference via the phase
accumulator. The 31st bit would be the sign, which indicates when to
start generating decel instead of accel in the triangle output.

Using integers as your phase acc register, this will naturally wrap
around. The idea is to keep summing with the incrementer, which has
been initially calculated from the update (sample) rate and desired freq.

Now since the DAC is only 12 bits, you use the MSBs as the ref for
the DAC and the LSBs (20 bits) for the fixed point integer math to get
the needed precision.

Now that we have a nice triangle wave that can be passed to a hybrid log
amp, the rest is trivial.

Actually, today I was looking at one of our circuits that seems to be
generating a trapezoid on the (-) peak instead of a sine, and that was a
log amp. Seems the transistor in the feedback failed somehow, and it
most likely has been like that for a long time; no one ever noticed
it before...

And on top of that, years ago I made sine wave oscillators from the
RC point of a 555 timer that drove a log amp designed to give rather
clean sine waves.

Jamie
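Jamie's accumulator scheme is easy to model. The widths below (32 bit accumulator, 12 bit DAC) follow his description; folding the top bits into a triangle is one reading of the intent, a sketch rather than a definitive implementation:

```python
ACC_BITS = 32    # phase accumulator width
DAC_BITS = 12    # DAC width

def tuning_word(f_out, f_sample):
    """Accumulator increment for a given output frequency."""
    return round(f_out / f_sample * 2**ACC_BITS)

def dds_triangle(increment, n):
    """Run the phase accumulator for n samples. Masking gives the natural
    modulo-2**ACC_BITS wrap-around; the top DAC_BITS+1 bits are folded
    into a triangle, the extra top bit playing the accel/decel role."""
    acc = 0
    out = []
    for _ in range(n):
        top = acc >> (ACC_BITS - DAC_BITS - 1)
        if top >= (1 << DAC_BITS):            # falling half of the triangle
            top = (2 << DAC_BITS) - 1 - top
        out.append(top)
        acc = (acc + increment) & (2**ACC_BITS - 1)
    return out
```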

lang...@fonz.dk

unread,
Mar 21, 2013, 8:31:10 PM3/21/13
to
On Mar 22, 1:09 am, Jamie
<jamie_ka1lpa_not_valid_after_ka1l...@charter.net> wrote:
> rickman wrote:
> > On 3/21/2013 1:14 PM, John Larkin wrote:
>
> >> On Wed, 20 Mar 2013 20:53:13 -0700, josephkk
> >> <joseph_barr...@sbcglobal.net>  wrote:
>
> >>> On Sun, 17 Mar 2013 17:22:59 -0700, John Larkin
> >>> <jjlar...@highNOTlandTHIStechnologyPART.com>  wrote:
Well, the DDS chips work just like you describe: an accumulator,
but instead of generating a triangle and shaping it, the MSBs index a
sine wave table going to a DAC.

You can do it with one quadrant of sine table. With MSB-1 you can
choose the 2nd/4th quadrant: invert the bits indexing the sine table
(which makes the index count "in reverse").

The MSB chooses between the 1st/2nd and 3rd/4th quadrants; for the 3rd
and 4th, negate the sine table output.

-Lasse
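Lasse's quadrant folding, sketched in code. The table size and 16 bit amplitude scale are arbitrary choices here; the half-LSB index bias is the trick mentioned at the top of the thread:

```python
import math

TABLE_BITS = 8
N = 1 << TABLE_BITS

# One quadrant, 16 bit amplitude; the half-LSB bias (i + 0.5) makes the
# 2**n-entry table symmetric so no extra 90-degree endpoint is needed.
QUARTER = [round(32767 * math.sin(math.pi / 2 * (i + 0.5) / N)) for i in range(N)]

def folded_sin(phase):
    """phase: 0 .. 4*N - 1, one full cycle from the quarter-wave table."""
    idx = phase & (N - 1)
    if phase & N:              # MSB-1 set: 2nd or 4th quadrant, reverse index
        idx = N - 1 - idx
    val = QUARTER[idx]
    if phase & (2 * N):        # MSB set: 3rd or 4th quadrant, negate output
        val = -val
    return val
```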

Jamie

unread,
Mar 21, 2013, 10:38:39 PM3/21/13
to
Well, I kind of figured that was the operation taking place, since I've
made circuits like that via RS485, but I used a log amp instead. Using
a single quadrant is all you need, and you can get very detailed with
the linear to sin translations.

Many of these DDS FGs seem to show that anything other than a sine
wave forces them to reduce output BW. I can only assume this is due to a
bottleneck from the uC to the chip when updating the pattern configuration
via SPI or I2C. I guess a parallel version would make a vast improvement.

Jamie

glen herrmannsfeldt

unread,
Mar 21, 2013, 9:36:24 PM3/21/13
to
In comp.dsp robert bristow-johnson <r...@audioimagination.com> wrote:
> On 3/21/13 6:20 PM, rickman wrote:

(snip, I wrote)
>>> The LUT is indexed by the high (4) bits of the input and the low bits
>>> to interpolate with. So it is, pretty much, 16 different interpolation
>>> tables
>>> which, in other words, is multiply by table lookup.

>> That is the poor man's LUT approach where the resources are precious. It
>> requires a "coarse" table using the high and mid bits of the input and a
>> second table indexed by the high and low bits of the input for a
>> correction.

You could instead index one ROM with eight bits, giving the
interpolation size (delta y), and then index another ROM with that
and the low four data bits. That would allow closer matching of
the interpolation slope to the actual data.

Seems to me that someone started selling MOS ROMs and then designed
some ready-programmed ones to sell.

> that's sorta the case for even the middle-class man's LUT and
> interpolation. you're always splitting a continuous-time parameter
> (however many bits are used to express it) into an integer part (that
> selects the set of samples used in the interpolation) and a fractional
> part (that defines how this set of samples are to be combined to be the
> interpolated value).

One would have to go through the numbers to see how much difference
it makes. This was before the EPROM. The final mask layer was generated
from the user's data file (in large enough quantities to make
it worthwhile), or for the special-case ROMs in the data book.

-- glen

John Larkin

unread,
Mar 21, 2013, 9:45:15 PM3/21/13
to
On Thu, 21 Mar 2013 18:45:02 -0400, rickman <gnu...@gmail.com> wrote:

>On 3/21/2013 1:14 PM, John Larkin wrote:
>> On Wed, 20 Mar 2013 20:53:13 -0700, josephkk
>> <joseph_...@sbcglobal.net> wrote:
>>
>>> On Sun, 17 Mar 2013 17:22:59 -0700, John Larkin
>>> <jjla...@highNOTlandTHIStechnologyPART.com> wrote:
>>>
>>>>
>>>>> I don't think you are clear on this. The table and linear interp are as
>>>>> good as they can get with the exception of one or possibly two lsbs to
>>>>> minimize the error in the linear interp. These is no way to correct for
>>>>> this by adding a second harmonic, etc. Remember, this is not a pure
>>>>> table lookup, it is a two step approach.
>>>>
>>>> Why can't some harmonic component be added to the table points to improve
>>>> downstream distortion? Of course, you'd need a full 360 degree table, not a
>>>> folded one.
>>>
>>> How do you get that? All harmonics would be within the 90 degree table.
>>
>> Second?
>
>Actually this would not work for anything except odd harmonics. The way
>the table is folded they have to be symmetrical about the 90 degree
>point as well as the X axis at 0 and 90 degrees. Odd harmonics would do
>this if they are phased correctly. I don't think it would be too useful
>that way.

Just looking at a sine at F and one at 2F, it's obvious that the 90
degree folded table can't make the 2F component. The 2nd harmonic has
to change sign between 0-90 degrees and 90-180 but can't if the same
90 degree table is used.

>
>
>>> Harmonic phase shifts is a different matter though, that would take
>>> careful characterization of production quantities of the final product,
>>> $$$$$.
>>
>> Of course you'd have to null each harmonic for gain and phase,
>> preferably on each production unit, but it wouldn't be difficult or
>> expensive *if* you had a distortion analyzer with the required PPM
>> performance. We regularly do automated calibrations, polynomial curve
>> fits and such, more complex than this.
>
>Might as well use a separate table for the nulling, it would be much
>simpler.

It's really about the same, but I guess if the main table is heavily
interpolated, it would make sense to have a little distortion table
off to the side, maybe even with its own DAC.
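The folding argument above is easy to check numerically: bake a small 2nd harmonic "correction" into a quarter-wave table, unfold it with the usual symmetry, and the half-wave antisymmetry of the result wipes out every even harmonic. A pure-Python sketch (table size arbitrary for the demonstration):

```python
import cmath
import math

N = 64   # quarter-table size

# Quarter-wave table with a deliberate 2nd-harmonic "correction" baked in.
quarter = [math.sin(math.pi / 2 * (i + 0.5) / N)
           + 0.05 * math.sin(math.pi * (i + 0.5) / N) for i in range(N)]

# Unfold with the usual quarter-wave symmetry: mirror, then negate.
full = (quarter + quarter[::-1]
        + [-v for v in quarter] + [-v for v in quarter[::-1]])

def dft_mag(x, k):
    """Magnitude of DFT bin k, normalised by the record length."""
    n = len(x)
    return abs(sum(v * cmath.exp(-2j * math.pi * k * i / n)
                   for i, v in enumerate(x))) / n

# The fundamental survives, but full[i + 2*N] == -full[i] (half-wave
# antisymmetry), which forces every even-harmonic bin to zero.
```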

John Larkin

unread,
Mar 21, 2013, 9:46:45 PM3/21/13
to
We know how to do it. We do DDS ourselves, with an FPGA and a DAC.
It's really pretty simple.

Spehro Pefhany

unread,
Mar 21, 2013, 10:48:46 PM3/21/13
to
On Thu, 21 Mar 2013 21:38:39 -0500, the renowned Jamie
<jamie_ka1lpa_not_v...@charter.net> wrote:

.
>
> Many of these DDS FG's seem to show that anything other than a sine
>wave, forces them to reduce output BW. I can only assume this due to a
>bottle neck from the uC to the chip to update pattern configuration via
>SPI or C2I.. I guess a parallel version would make a vast improvement.
>
>Jamie

Compare the bandwidth of a sine wave of frequency f vs. a sawtooth of
the same frequency.


Best regards,
Spehro Pefhany
--
"it's the network..." "The Journey is the reward"
sp...@interlog.com Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog Info for designers: http://www.speff.com

Martin Brown

unread,
Mar 22, 2013, 3:37:31 AM3/22/13
to
On 17/03/2013 22:57, John Larkin wrote:
> On Sun, 17 Mar 2013 18:44:07 -0400, rickman <gnu...@gmail.com> wrote:
>
>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>
>>> It would be interesting to distort the table entries to minimize THD. The
>>> problem there is how to measure very low THD to close the loop.
>>
>> It could also be a difficult problem to solve without some sort of
>> exhaustive search. Each point you fudge affects two curve segments and
>> each curve segment is affected by two points. So there is some degree
>> of interaction.
>>
>> Anyone know if there is a method to solve such a problem easily? I am
>> sure that my current approach gets each segment end point to within ±1
>> lsb and I suppose once you measure THD in some way it would not be an
>> excessive amount of work to tune the 256 end points in a few passes
>> through. This sounds like a tree search with pruning.
>
> If I could measure the THD accurately, I'd just blast in a corrective harmonic
> adder to the table for each of the harmonics. That could be scientific or just
> iterative. For the 2nd harmonic, for example, add a 2nd harmonic sine component
> into all the table points, many of which wouldn't even change by an LSB, to null
> out the observed distortion. Gain and phase, of course.
>
>>
>> I'm assuming there would be no practical closed form solution. But
>> couldn't the THD be calculated in a simulation rather than having to be
>> measured on the bench?
>
> Our experience is that the output amplifiers, after the DAC and lowpass filter,
> are the nasty guys for distortion, at least in the 10s of MHz. Lots of
> commercial RF generators have amazing 2nd harmonic specs, like -20 dBc. A table
> correction would unfortunately be amplitude dependent, so it gets messy fast.

Is it really "pure" second harmonic distortion or a tiny mismatch in the
gain bandwidth product of the power amplifier that could be tweaked out
by a small change in the digital amplitude of the negative cycle?

--
Regards,
Martin Brown

Jerry Avins

unread,
Mar 22, 2013, 1:15:28 PM3/22/13
to
On 3/18/2013 7:18 PM, glen herrmannsfeldt wrote:
> In comp.dsp rickman <gnu...@gmail.com> wrote:
>> On 3/18/2013 2:10 PM, Jerry Avins wrote:
>
> (snip, ending on CORDIC)
>
>>> http://www.andraka.com/files/crdcsrvy.pdf should be a good start. Do you
>>> remember Ray Andraka?
>
>> Yes, of course. I should have thought of that. I don't go to his site
>> often, but his stuff is always excellent. I may have even downloaded
>> this paper before. I took a look at the CORDIC once a few years ago and
>> didn't get too far. I'm happy with my current approach for the moment,
>> but I will dig into this more another time.
>
> I once thought I pretty well understood binary CORDIC, but it
> is also used in decimal on many pocket calculators. I never got
> far enough to figure out how they did that.

The hp35 used CORDIC to compute trig functions. Its basic math
operations were BCD, using decimal arithmetic. Most CORDIC
implementations skip "irrelevant" rotations, so the scale of the result
isn't constant for all inputs. To get sine, the hp35 computed both sine
and cosine (little extra cost for getting both), which had the same
scale, then divided to get the cotangent; the scale factors cancel.
1/sqrt(cot(x)^2 + 1) = sin(x). Since the arithmetic functions were all
in place, it was the fastest way to the result.

So what does "simple" really mean?
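The identity Jerry describes is easy to verify. In this sketch, math.cos and math.sin stand in for the scale-contaminated CORDIC pair, whose common scale factor cancels in the ratio:

```python
import math

def sin_from_cot(x):
    """Valid for 0 < x < pi/2, where sine and cosine are both positive."""
    cot = math.cos(x) / math.sin(x)          # stand-in for the CORDIC pair's ratio
    return 1.0 / math.sqrt(cot * cot + 1.0)  # 1/sqrt(cot^2 + 1) = sin(x)
```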