Spectral Purity Measurement

rickman wrote:
I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS. My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too
easy to get such good results. I guess I'm worried that it will be hard
to measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code. The implementation will be synthesizable and the
measurement code will not. I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.

--

Rick
 
On Fri, 19 Dec 2014 10:06:50 -0500, rickman wrote:

I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS. My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too
easy to get such good results. I guess I'm worried that it will be hard
to measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code. The implementation will be synthesizable and the
measurement code will not. I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.

If you mean a real circuit and not an FPGA configuration, and if you have
any analog components in there, then you need to measure the thing with a
spectrum analyzer. No spectrum analyzer in the world has a 240dB dynamic
range, so you'd need to notch out the carrier with something absurdly deep
and narrow-band, like a crystal filter. Measuring spurs down to that
level would be a significant challenge for an experienced RF engineer -- I
don't know that I could, or if I'd trust my results without double-
checking from someone who did it every day.

Even if you're measuring this numerically I think you need to do some
careful and close analysis of whatever method you choose.

An FFT that short will only be good to -240dBc if it collects an exact
integer number of samples -- if it collects more or less, the artifacts
from truncating the series will overwhelm any real effects.

-240dBc implies 40 bits of precision, so you'll need to be sure that the
error build-up in your FFT (or whatever) doesn't exceed that. You're
talking a 12-stage FFT, and double-precision floating point has a 52-bit
mantissa, so if everything stacks up wrong you've just blown your error
budget. Such errors tend to be smeared out rather than to build up -- but
you need to check with analysis to be sure.
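Tim's two points (coherent capture and double-precision head-room) are easy to check numerically. A minimal numpy sketch, with an arbitrary bin number and FFT length not taken from the thread:

```python
import numpy as np

n = 4096                      # FFT length
t = np.arange(n)

# Coherent capture: an exact integer number of cycles (here 129) in the record.
coherent = np.sin(2 * np.pi * 129 * t / n)

# Non-coherent capture: 129.5 cycles, so the record truncates mid-cycle.
noncoherent = np.sin(2 * np.pi * 129.5 * t / n)

def floor_dbc(x):
    """Largest spectral line in dBc, excluding the carrier bin and its neighbours."""
    spec = np.abs(np.fft.rfft(x)) / (n / 2)      # normalize so a full-scale tone is 1.0
    carrier = int(np.argmax(spec))
    spec[max(carrier - 2, 0):carrier + 3] = 0.0  # blank the carrier region
    return 20 * np.log10(spec.max() + 1e-300)

print(floor_dbc(coherent))     # double-precision round-off floor, far below -200 dBc
print(floor_dbc(noncoherent))  # truncation leakage: only a few tens of dB down
```

With coherent sampling the rectangular-window artifacts vanish and the floor is set by round-off; with half a cycle of mismatch the leakage sidelobes sit only tens of dB below the carrier, exactly as described above.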

If you can, it may be best to generate a file of DDS outputs, and then do
the analysis in some separate package like Scilab, Octave or Matlab. Even
there, however, I would be concerned about the needed precision, and I'd
seriously consider finding an FFT package that is, or can be compiled to,
a quad-precision version.

All of this really makes me want to ask _why_ -- if you're working in some
application where you need to keep your DDS that spectrally pure, then
chances are good that even with an absolutely perfect DDS, you're already
screwed. You may want to review how well this thing is going to work when
your input signal has noise, and has the inevitable distortion that comes
from being measured by analog components.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On Friday, December 19, 2014 10:07:02 AM UTC-5, rickman wrote:
I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS.

I've posted some notes to comp.arch.fpga about this on occasion; the following post provides some analysis examples and links to modeling software:

https://groups.google.com/forum/#!msg/comp.arch.fpga/MAyeKC9SRDI/H9vE28kvuF0J

I'm thinking a fairly large FFT, 2048 or maybe 4096 bins in floating point.

Typically you'll need a much bigger FFT than that to see the close-in stuff; the dds_oddities.pdf examples from the links above used FFT sizes of up to 2 Mpoints.

Another difficulty in seeing these close in spurs with an FFT is that the "Grand Repetition Period" of a DDS with a large phase accumulator is so long that brute force FFT analysis of the whole truncation/quantization sequence is practically impossible.

You can make some headway on this (for certain sequences) by precessing the phase of the DDS to near one of the truncation transients such that the transient occurs midway through the FFT input record.
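That repetition period follows directly from the accumulator width and tuning word; a small sketch (the example tuning words here are arbitrary):

```python
from math import gcd

def dds_period(ftw, acc_bits):
    """Samples before a phase-accumulator DDS output sequence repeats."""
    m = 1 << acc_bits
    return m // gcd(ftw, m)

print(dds_period(0x12345677, 32))  # odd FTW: full 2**32-sample period
print(dds_period(1 << 20, 32))     # power-of-two FTW: repeats every 4096 samples
```

Any odd tuning word forces the full 2**32-sample period, which is why brute-force FFT analysis of the whole sequence is impractical.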

-Brian
 
Earlier, I wrote:
I've posted some notes to comp.arch.fpga about this
on occasion; the following post provides some analysis
examples and links to modeling software:

https://groups.google.com/forum/#!msg/comp.arch.fpga/MAyeKC9SRDI/H9vE28kvuF0J

Updated location of the broken link [2] from that old post:
https://sites.google.com/site/fpgastuff/dds_oddities.pdf

[1] close in DDS phase noise artifacts:
http://groups.google.com/group/comp.arch.fpga/msg/0b1a2f345aa1c350

[2] plots of DDS spur pileups (modeling numerical spurs only)
http://members.aol.com/fpgastuff/dds_oddities.pdf

[3] related posts about the pdf file in [2]
http://groups.yahoo.com/group/spectrumanalyzer/message/1027
http://groups.yahoo.com/group/spectrumanalyzer/message/1038

-Brian
 
-240 dBc is a very low signal level and will be below the noise floor of
the environment being tested in. With a good spectrum analyser you may
get down to -160 dBm. Are you really sure about the power level?

Compare with
http://www.rohde-schwarz.co.uk/en/product/fsu-productstartpage_63493-7993.html

On 19/12/14 15:06, rickman wrote:
I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS. My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too
easy to get such good results. I guess I'm worried that it will be hard
to measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code. The implementation will be synthesizable and the
measurement code will not. I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.
 
Andy Botterill wrote:
-240 dBc is a very low signal level and will be below the noise floor of
the environment being tested in. With a good spectrum analyser you may
get down to -160 dBm. Are you really sure about the power level?

Compare with
http://www.rohde-schwarz.co.uk/en/product/fsu-productstartpage_63493-7993.html


On 19/12/14 15:06, rickman wrote:
I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS. My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too
easy to get such good results. I guess I'm worried that it will be hard
to measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code. The implementation will be synthesizable and the
measurement code will not. I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.

Are decibels used differently for dBc than for other usages? I would
have thought that 6 orders of magnitude (1 ppm) was -120 dB not -240 dB
20 * log10 (10**-6) = 20 * -6 = -120
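Gabor's arithmetic is right for an amplitude ratio; a one-line check (dBc simply means the reference is the carrier):

```python
import math

# 1 ppm as an amplitude ratio: 20*log10(1e-6), i.e. -120 dB.
# The same ratio taken as a power ratio would be 10*log10(1e-6), i.e. -60 dB.
print(20 * math.log10(1e-6))   # approximately -120
print(10 * math.log10(1e-6))   # approximately -60
```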

--
Gabor
 
On Fri, 19 Dec 2014 17:22:14 -0500, GaborSzakacs wrote:

Andy Botterill wrote:
-240 dBc is a very low signal level and will be below the noise floor of
the environment being tested in. With a good spectrum analyser you may
get down to -160 dBm. Are you really sure about the power level?

Compare with
http://www.rohde-schwarz.co.uk/en/product/fsu-
productstartpage_63493-7993.html


On 19/12/14 15:06, rickman wrote:
I want to analyze the output of a DDS circuit and am wondering if an
FFT is the best way to do this. I'm mainly concerned with the "close
in" spurs that are often generated by a DDS. My analysis of the
errors involved in the sine generation is that they will be on the
order of 1 ppm which I believe will be -240 dBc. Is that right?
Sounds far too easy to get such good results. I guess I'm worried
that it will be hard to measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code. The implementation will be synthesizable and the
measurement code will not. I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.



Are decibels used differently for dBc than for other usages? I would
have thought that 6 orders of magnitude (1 ppm) was -120 dB not -240 dB
20 * log10 (10**-6) = 20 * -6 = -120

No, Rick made an arithmetic mistake, or he doubled his dB twice. And I
didn't notice in my posting where I went on and on about the difficulty of
verifying -240dBc, and the uselessness thereof. (-120dBc is still
exceedingly hard to achieve in analog-land, and not necessarily useful in
digital-land unless your goal is to be so damned good that you never have
to worry about that being the source of your problems).

dBc simply means "dB referenced to the carrier", so a signal that's -20dBc
is 1/10th the amplitude, and 1/100th the power, of the carrier.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On 12/19/14 10:06 AM, rickman wrote:
I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS.

i still get the concepts of DDS and NCO mixed up. what are the differences?

is this a circuit with an analog output? or are you looking at the
stream of samples before they get to the D/A converter?


My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too easy
to get such good results. I guess I'm worried that it will be hard to
measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code.

okay so you're at the samples before they're output to the D/A. instead
of, i presume windowing with a decent window (like a Kaiser, but a
Hamming might do in a pinch), using the FFT and looking for how clean
the spike is, i would suggest a notch filter tuned to the frequency that
you *know* is coming out of the NCO because you know the phase
increment. or is this DDS generated differently than an NCO, like using
some recursion equation? anyway, whatever comes out of that
precisely-tuned, narrowband notch filter is the error signal. if there
are spurs or whatever distortion, it will be in that notch filter output.
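A sketch of that notch-filter idea: a biquad with zeros exactly on the unit circle at the known NCO frequency and poles just inside. The sample rate, tone frequency, and pole radius below are arbitrary assumptions, not values from the thread:

```python
import numpy as np

fs = 1e6           # assumed sample rate
f0 = 123456.0      # the frequency you *know* the NCO is producing
r = 0.999          # pole radius: closer to 1 gives a narrower notch, slower settling

w = 2 * np.pi * f0 / fs
b = [1.0, -2.0 * np.cos(w), 1.0]          # zeros on the unit circle at +/- w
a = [1.0, -2.0 * r * np.cos(w), r * r]    # poles at radius r, same angle

def biquad(b, a, x):
    """Direct-form II transposed biquad; a[0] is assumed to be 1."""
    y = np.empty_like(x)
    s1 = s2 = 0.0
    for i, xn in enumerate(x):
        yn = b[0] * xn + s1
        s1 = b[1] * xn - a[1] * yn + s2
        s2 = b[2] * xn - a[2] * yn
        y[i] = yn
    return y

tone = np.sin(w * np.arange(50_000))      # stand-in for the DDS sample stream
residual = biquad(b, a, tone)
# Once the transient dies out, the carrier is gone; spurs and noise remain.
print(np.abs(residual[25_000:]).max())    # essentially zero for a pure tone
```

Anything the DDS adds that isn't at f0 passes through the notch nearly untouched, so the settled residual is exactly the error signal described above.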

The implementation will be synthesizable and the
measurement code will not.

i dunno what synthesizable code is.

I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.

i wouldn't bother with the FFT unless you want to run it on the notch
filter output. if you have an FFT in your toolbag, it sounds like your
code is floating point. is that the case? because with "vhdl", that
sounds like it might be a fixed-point architecture.


--

r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."
 
On Fri, 19 Dec 2014 18:19:24 -0500, robert bristow-johnson wrote:

On 12/19/14 10:06 AM, rickman wrote:
I want to analyze the output of a DDS circuit and am wondering if an
FFT is the best way to do this. I'm mainly concerned with the "close
in" spurs that are often generated by a DDS.

i still get the concepts of DDS and NCO mixed up. what are the
differences?

is this a circuit with an analog output? or are you looking at the
stream of samples before they get to the D/A converter?


My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too
easy to get such good results. I guess I'm worried that it will be hard
to measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code.

okay so you're at the samples before they're output to the D/A. instead
of, i presume windowing with a decent window (like a Kaiser, but a
Hamming might do in a pinch), using the FFT and looking for how clean
the spike is, i would suggest a notch filter tuned to the frequency that
you *know* is coming out of the NCO because you know the phase
increment. or is this DDS generated differently than an NCO, like using
some recursion equation? anyway, whatever comes out of that
precisely-tuned, narrowband notch filter is the error signal. if there
are spurs or whatever distortion, it will be in that notch filter
output.

The implementation will be synthesizable and the measurement code will
not.

i dunno what synthesizable code is.

Synthesizable code is code that the tool knows how to make into FPGA
firmware. HDL projects generally have both a hardware description
component which is synthesizable (or at least one fervently hopes) and a
test component which generally is not. The tools will simulate the whole
design under the control of the test component.

I'm thinking a fairly large FFT, 2048 or maybe 4096 bins in floating
point.

i wouldn't bother with the FFT unless you want to run it on the notch
filter output. if you have an FFT in your toolbag, it sounds like your
code is floating point. is that the case? because with "vhdl", that
sounds like it might be a fixed-point architecture.

The test component can have floating point.

For that matter, FPGAs are big enough to support code bloat these days;
it's not unheard of to have floating-point math on them, although I think
that fixed-point math is still the most common.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On 12/19/2014 9:26 PM, Brian Davis wrote:
Earlier, I wrote:

I've posted some notes to comp.arch.fpga about this
on occasion; the following post provides some analysis
examples and links to modeling software:

https://groups.google.com/forum/#!msg/comp.arch.fpga/MAyeKC9SRDI/H9vE28kvuF0J


Updated location of the broken link [2] from that old post:
https://sites.google.com/site/fpgastuff/dds_oddities.pdf

[1] close in DDS phase noise artifacts:
http://groups.google.com/group/comp.arch.fpga/msg/0b1a2f345aa1c350

[2] plots of DDS spur pileups (modeling numerical spurs only)
http://members.aol.com/fpgastuff/dds_oddities.pdf

[3] related posts about the pdf file in [2]
http://groups.yahoo.com/group/spectrumanalyzer/message/1027
http://groups.yahoo.com/group/spectrumanalyzer/message/1038

Thank you for the references. The PDF file was especially interesting
with all the plots of effects of PT and AQ.

Just curious, what is up with the AOL thing? I couldn't view the link
without joining. What good is posting content people can't view?

--

Rick
 
On Fri, 19 Dec 2014 18:19:24 -0500, robert bristow-johnson
<rbj@audioimagination.com> wrote:

On 12/19/14 10:06 AM, rickman wrote:
I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS.

i still get the concepts of DDS and NCO mixed up. what are the differences?

One is spelled DDS and the other is spelled NCO.

They're basically the same thing, like 4WD and AWD. The difference
is mostly marketing. ;)


is this a circuit with an analog output? or are you looking at the
stream of samples before they get to the D/A converter?


My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too easy
to get such good results. I guess I'm worried that it will be hard to
measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code.

okay so you're at the samples before they're output to the D/A. instead
of, i presume windowing with a decent window (like a Kaiser, but a
Hamming might do in a pinch), using the FFT and looking for how clean
the spike is, i would suggest a notch filter tuned to the frequency that
you *know* is coming out of the NCO because you know the phase
increment. or is this DDS generated differently than an NCO, like using
some recursion equation? anyway, whatever comes out of that
precisely-tuned, narrowband notch filter is the error signal. if there
are spurs or whatever distortion, it will be in that notch filter output.

The implementation will be synthesizable and the
measurement code will not.

i dunno what synthesizable code is.

Hardware Description Language that can be synthesized to gates or
other hardware.

I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.

i wouldn't bother with the FFT unless you want to run it on the notch
filter output. if you have an FFT in your toolbag, it sounds like your
code is floating point. is that the case? because with "vhdl", that
sounds like it might be a fixed-point architecture.


--

r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
 
On Fri, 19 Dec 2014 10:06:50 -0500, rickman wrote:

I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS. My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too
easy to get such good results. I guess I'm worried that it will be hard
to measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code. The implementation will be synthesizable and the
measurement code will not. I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.

1 ppm would be -120 dBc, surely... (20 bits)

I believe you can subtract an ideal signal, then FFT the remainder.

You may also want to downconvert to a relatively low frequency so that
you can get a decent bin spacing to examine close-in spurs.
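Brian's subtract-then-FFT approach in miniature, with 16-bit amplitude quantization standing in for the DDS error (bit width, bin number, and FFT size are arbitrary choices here):

```python
import numpy as np

n = 8192
k = 517                                  # integer number of cycles: coherent record
phase = 2 * np.pi * k * np.arange(n) / n

ideal = np.sin(phase)
dds = np.round(ideal * 2**15) / 2**15    # stand-in DDS output: 16-bit quantized sine

residual = dds - ideal                   # the carrier cancels; only the error remains
spec = 20 * np.log10(np.abs(np.fft.rfft(residual)) / (n / 2) + 1e-300)
print(spec.max())                        # strongest error line, in dBc (carrier amp = 1)
```

Because the carrier is removed before the FFT, the dynamic-range burden falls on the subtraction (the ideal reference must be much cleaner than the error being hunted), not on the FFT itself.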

- Brian
 
On Saturday, December 20, 2014 1:55:06 AM UTC-5, rickman wrote:
Updated location of the broken link[2] from that old post:
https://sites.google.com/site/fpgastuff/dds_oddities.pdf
snip
Thank you for the references. The PDF file was especially interesting
with all the plots of effects of PT and AQ.

Just curious, what is up with the AOL thing? I couldn't view the link
without joining. What good is posting content people can't view?

AOL used to provide free FTP space but silently axed the service about five years ago, so the files aren't there any more; I moved all the stuff I'd posted over the years to that new Google Sites page. I think the login redirect you're seeing is just some sort of broken-link default for their site.

Allan wrote:
Consider the difference between a regular Spectrum Analyser
and a Phase Noise test set. The Phase Noise test set is really
just a sort of spectrum analyser but it is designed for looking
at low level phase noise.

I first noticed these close-in spurious effects whilst measuring DDS phase noise on a 3048A in the early 90's :)

-Brian
 
On Fri, 19 Dec 2014 18:19:24 -0500, robert bristow-johnson
<rbj@audioimagination.com> wrote:

On 12/19/14 10:06 AM, rickman wrote:
I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS.

i still get the concepts of DDS and NCO mixed up. what are the differences?

According to Wikipedia (under "numerically controlled
oscillator") the NCO is the digital part, which drives a DAC
to make a DDS.

Bob Masta

DAQARTA v7.60
Data AcQuisition And Real-Time Analysis
www.daqarta.com
Scope, Spectrum, Spectrogram, Sound Level Meter
Frequency Counter, Pitch Track, Pitch-to-MIDI
FREE Signal Generator, DaqMusiq generator
Science with your sound card!
 
On Fri, 19 Dec 2014 10:06:50 -0500, rickman wrote:

I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS. My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too
easy to get such good results. I guess I'm worried that it will be hard
to measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code. The implementation will be synthesizable and the
measurement code will not. I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.

BTW, you are looking for spurs at -120dBc, not -240dBc.


An FFT is part of the solution, but naively FFTing the DDS output
waveform won't give you good results.

Consider the difference between a regular Spectrum Analyser and a Phase
Noise test set. The Phase Noise test set is really just a sort of
spectrum analyser but it is designed for looking at low level phase
noise. Keysight (used to be Agilent) claim to have a sensitivity of
about -180 dBc/Hz on their top of the line model. That's an awful lot
better than any regular SA. (It also claims to work to 110GHz.)

The trick is to get rid of the carrier before calculating the spectrum.
The FFT only needs to see the noise, rather than the signal + noise.

May I suggest you do the following in your HDL simulation:

1. Generate an "ideal" reference waveform. Use floating point (but use
it carefully).

2. Mix this ideal waveform with the waveform from your simulated DDS.
You can use a real mixer (i.e. a multiplier). The ideal waveform and the
DDS output must be close to pi/2 out of phase. The accuracy of this
phase shift determines the amount of carrier cancellation.

3. Get rid of the 2F component at the output of the mixer, i.e. low pass
filter.

4. FFT the output of the lpf.

5a Spend half an hour scratching your head trying to work out how to
interpret the results.

5b. Decide that the maths is beyond human comprehension. At this point,
you either refer to some HP system journal from last century, or
determine the scale factors empirically by measuring a test signal with a
known amount of phase or frequency modulation.
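Steps 1-4 above can be sketched in a few lines of numpy. The carrier frequency, spur, and filter length below are arbitrary assumptions, and a crude moving average stands in for a real low-pass filter (its length is chosen so it nulls the 2F product exactly):

```python
import numpy as np

n = 1 << 16
k = 1024.0                                 # carrier sits k cycles into the record
t = np.arange(n)
w = 2 * np.pi * k / n

# Simulated DDS output: carrier plus one -120 dBc spur at 0.9 * f0.
dds = np.sin(w * t) + 1e-6 * np.sin(0.9 * w * t)

# Step 2: mix with an ideal reference 90 degrees out of phase.
# sin * cos has no DC term, so the carrier lands entirely at 2F.
mixed = dds * np.cos(w * t)

# Step 3: 64-tap moving average; 64 samples span exactly two cycles of 2F,
# so the 2F product is nulled while the near-DC error terms pass through.
lpf = np.convolve(mixed, np.ones(64) / 64, mode="valid")

# Step 4: FFT the baseband error signal.
spec = np.abs(np.fft.rfft(lpf * np.hanning(len(lpf))))
print(np.argmax(spec))   # the spur appears at 0.1 * f0, mixed down to baseband
```

Step 5 (working out the scale factors) is where the head-scratching comes in; calibrating the chain with a test tone of known level, as suggested above, is the pragmatic way out.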

Allan
 
In comp.dsp Allan Herriman <allanherriman@hotmail.com> wrote:
On Fri, 19 Dec 2014 10:06:50 -0500, rickman wrote:

I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this. I'm mainly concerned with the "close in"
spurs that are often generated by a DDS. My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too
easy to get such good results. I guess I'm worried that it will be hard
to measure such low levels.

(snip)

BTW, you are looking for spurs at -120dBc, not -240dBc.

An FFT is part of the solution, but naively FFTing the DDS output
waveform won't give you good results.

Consider the difference between a regular Spectrum Analyser and a Phase
Noise test set. The Phase Noise test set is really just a sort of
spectrum analyser but it is designed for looking at low level phase
noise. Keysight (used to be Agilent) claim to have a sensitivity of
about -180 dBc/Hz on their top of the line model. That's an awful lot
better than any regular SA. (It also claims to work to 110GHz.)

The trick is to get rid of the carrier before calculating the spectrum.
The FFT only needs to see the noise, rather than the signal + noise.

May I suggest you do the following in your HDL simulation:

1. Generate an "ideal" reference waveform. Use floating point
(but use it carefully).

My choice would be fixed point.

With fixed point, you know exactly how the rounding is done, and it
is done independent of the size of the values at any point in the
computation. You could, for example, use 64 bit fixed point instead
of 64 bit floating point.

2. Mix this ideal waveform with the waveform from your simulated DDS.
You can use a real mixer (i.e. a multiplier). The ideal waveform and the
DDS output must be close to pi/2 out of phase. The accuracy of this
phase shift determines the amount of carrier cancellation.

Pretty much you are computing, and then subtracting, one frequency
(Fourier) component from the signal. You need enough bits (accuracy)
to not have rounding contribute to the result (noise).

3. Get rid of the 2F component at the output of the mixer,
i.e. low pass filter.

4. FFT the output of the lpf.

For fixed point FFT, the values can increase one bit at each stage
of the FFT. On average they will increase by sqrt(2) (RMS), but if
the original carrier is still there, you likely get an increase
by a factor of 2 in some bin. If you have enough bits, original
signal resolution plus log2(FFT length) seems to me you could just
run it through the FFT. Well, that might work best if the carrier
was in a single bin.

5a Spend half an hour scratching your head trying to work out how to
interpret the results.

5b. Decide that the maths is beyond human comprehension. At this point,
you either refer to some HP system journal from last century, or
determine the scale factors empirically by measuring a test signal with a
known amount of phase or frequency modulation.

-- glen
 
On Sat, 20 Dec 2014 13:43:55 +0000, Allan Herriman wrote:

On Fri, 19 Dec 2014 10:06:50 -0500, rickman wrote:

I want to analyze the output of a DDS circuit and am wondering if an
FFT is the best way to do this. I'm mainly concerned with the "close
in" spurs that are often generated by a DDS. My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc. Is that right? Sounds far too
easy to get such good results. I guess I'm worried that it will be
hard to measure such low levels.

Any suggestions? I'll be coding both the implementation and the
measurement code. The implementation will be synthesizable and the
measurement code will not. I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.


BTW, you are looking for spurs at -120dBc, not -240dBc.


An FFT is part of the solution, but naively FFTing the DDS output
waveform won't give you good results.

Consider the difference between a regular Spectrum Analyser and a Phase
Noise test set. The Phase Noise test set is really just a sort of
spectrum analyser but it is designed for looking at low level phase
noise. Keysight (used to be Agilent) claim to have a sensitivity of
about -180 dBc/Hz on their top of the line model. That's an awful lot
better than any regular SA. (It also claims to work to 110GHz.)

The trick is to get rid of the carrier before calculating the spectrum.
The FFT only needs to see the noise, rather than the signal + noise.

May I suggest you do the following in your HDL simulation:

1. Generate an "ideal" reference waveform. Use floating point (but use
it carefully).

2. Mix this ideal waveform with the waveform from your simulated DDS.
You can use a real mixer (i.e. a multiplier). The ideal waveform and
the DDS output must be close to pi/2 out of phase. The accuracy of this
phase shift determines the amount of carrier cancellation.

3. Get rid of the 2F component at the output of the mixer, i.e. low
pass filter.

4. FFT the output of the lpf.

5a Spend half an hour scratching your head trying to work out how to
interpret the results.

5b. Decide that the maths is beyond human comprehension. At this
point,
you either refer to some HP system journal from last century, or
determine the scale factors empirically by measuring a test signal with
a known amount of phase or frequency modulation.

Allan

oops, forgot to mention that after you get rid of the carrier by mixing
down to 0Hz (in step 2) and removing the 2F components (in step 3), you
can decimate the signal to reduce the bandwidth.
This allows you to avoid the need to calculate monster FFTs if you're
only interested in the "close in" spurs.
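To put numbers on that: once the signal is at baseband and band-limited, even a block-average decimator (the crudest possible one; a real design would put a proper low-pass or CIC in front) shrinks the span so a modest FFT gives fine close-in resolution. The rates here are arbitrary assumptions:

```python
import numpy as np

fs = 100e6                     # assumed DDS sample rate
decim = 1000                   # decimation ratio after the mix-down and LPF

rng = np.random.default_rng(0)
err = rng.normal(scale=1e-6, size=1_000_000)   # placeholder baseband error signal

# Average-and-decimate: each output sample is the mean of one 1000-sample block.
dec = err[: len(err) // decim * decim].reshape(-1, decim).mean(axis=1)

# A 4096-point FFT now spans fs/decim = 100 kHz instead of 100 MHz,
# so each bin is about 24 Hz wide rather than about 24 kHz.
print(fs / decim / 4096)
```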

Allan
 
On Sat, 20 Dec 2014 19:43:33 +0000, glen herrmannsfeldt wrote:

In comp.dsp Allan Herriman <allanherriman@hotmail.com> wrote:

1. Generate an "ideal" reference waveform. Use floating point
(but use it carefully).

My choice would be fixed point.

With fixed point, you know exactly how the rounding is done, and it is
done independent of the size of the values at any point in the
computation. You could, for example, use 64 bit fixed point instead of
64 bit floating point.

Rickman appears to be writing a testbench in VHDL. If that is the case,
he already has double precision floating point trig functions built in to
his simulator (in package ieee.math_real). To use fixed point would be
to reimplement and verify the trig functions from scratch - a task that
is possibly harder than the original problem he is trying to solve.

In general though, I do take your point about the rounding.

I would also hazard a guess that Rickman is outputting samples from his
testbench and then using a standalone FFT package (outside the VHDL
simulation environment) instead of trying to code the FFT in VHDL. I
guess this will probably only use floating point.


I was thinking about the size of the FFT. The DDS is an FSM. The output
is periodic. It's possible to match the number of points in the FFT to
the number of states in the FSM, completely eliminating spectral leakage
due to windowing. But I suspect he's using a 32 bit phase accumulator,
which would rule out this approach. (How big can FFTs get these days?
The largest I've ever done had 2**19 complex points, but that was last
century on a Sparc.)

Regards,
Allan
 
On 12/20/2014 11:33 PM, Allan Herriman wrote:
On Sat, 20 Dec 2014 19:43:33 +0000, glen herrmannsfeldt wrote:

In comp.dsp Allan Herriman <allanherriman@hotmail.com> wrote:

1. Generate an "ideal" reference waveform. Use floating point
(but use it carefully).

My choice would be fixed point.

With fixed point, you know exactly how the rounding is done, and it is
done independent of the size of the values at any point in the
computation. You could, for example, use 64 bit fixed point instead of
64 bit floating point.

Rickman appears to be writing a testbench in VHDL. If that is the case,
he already has double precision floating point trig functions built in to
his simulator (in package ieee.math_real). To use fixed point would be
to reimplement and verify the trig functions from scratch - a task that
is possibly harder than the original problem he is trying to solve.

A reasonable assumption although I couldn't find info that said that
reals were double precision (64 bit). In fact, the info I found said
they are only assured to be 32 bit, single precision. Is that wrong?

If the VHDL floating point only has a 24 bit mantissa the resolution is
only slightly better than the signals I am attempting to measure. In
that case I would consider writing out the NCO data to a file for
processing in some other environment. In fact, maybe I should do that
anyway for multiple reasons. I understand there are open source
packages similar to Matlab. I may try using one of these.


In general though, I do take your point about the rounding.

I would also hazard a guess that Rickman is outputting samples from his
testbench and then using a standalone FFT package (outside the VHDL
simulation environment) instead of trying to code the FFT in VHDL. I
guess this will probably only use floating point.


I was thinking about the size of the FFT. The DDS is an FSM. The output
is periodic. It's possible to match the number of points in the FFT to
the number of states in the FSM, completely eliminating spectral leakage
due to windowing. But I suspect he's using a 32 bit phase accumulator,
which would rule out this approach. (How big can FFTs get these days?
The largest I've ever done had 2**19 complex points, but that was last
century on a Sparc.)

Once I find the spurs in an FFT, I can narrow down the search to
selected bins and use a DFT.
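Probing selected bins directly is cheap: a single-frequency DFT costs O(N) per probe and, unlike the FFT grid, can be evaluated at any frequency (a Goertzel filter is the classic cheaper equivalent). The probe frequencies below are arbitrary:

```python
import numpy as np

def single_bin_dft(x, f):
    """Complex DFT of x at one frequency f (cycles/sample), normalized so a
    full-scale tone at f returns magnitude ~1.0. O(N) per probed frequency."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * f * n)) / (len(x) / 2)

x = np.sin(2 * np.pi * 0.125 * np.arange(4096))   # test tone at 0.125 cycles/sample

print(abs(single_bin_dft(x, 0.125)))   # on-frequency probe: close to 1.0
print(abs(single_bin_dft(x, 0.2)))     # off-frequency probe: small leakage only
```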

--

Rick
 
On Sun, 21 Dec 2014 01:28:00 -0500, rickman wrote:

A reasonable assumption although I couldn't find info that said that
reals were double precision (64 bit). In fact, the info I found said
they are only assured to be 32 bit, single precision. Is that wrong?

That's a good point. It's implementation dependent.

The old version of Modelsim that I have on this computer has this in the
source for the std library:

type real is range -1.0E308 to 1.0E308;

which is equivalent to 64 bit "double". I don't imagine that any
mainstream compiler would use less than 64 bits for real, but I could be
wrong.
OTOH, if you know that all the compilers you're using support 64 bit,
it's probably safe to rely on that.

Regards,
Allan
 
